# Mission to Mars

Web application that scrapes various websites for data related to the Mission to Mars and displays the information in a single HTML page using BeautifulSoup, Requests, Selenium and MongoDB with Flask. | UT Data Analysis and Visualization Nov 2019 - May 2020.

In this assignment, you will build a web application that scrapes various websites for data related to the Mission to Mars and displays the information in a single HTML page. The following outlines what you need to do.

**NOTE:** All of this can be done using `requests.get()` and converting the content into soup using BeautifulSoup and `html.parser`.

## Step 1 - Scraping

Complete your initial scraping using Jupyter Notebook, BeautifulSoup, Pandas, and Requests/Splinter.

* Create a Jupyter Notebook file called `mission_to_mars.ipynb` and use this to complete all of your scraping and analysis tasks. The following outlines what you need to scrape.

### NASA Mars News

* Scrape the NASA Mars News Site and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.

* **BONUS:** The `news_p` teaser text may not come through with Requests alone because of page loading; you might have to try this with Selenium or Splinter. A hedged scraping sketch follows the example below.

```python
# Example:
news_title = "NASA's Next Mars Mission to Investigate Interior of Red Planet"

news_p = "Preparation of NASA's next spacecraft to Mars, InSight, has ramped up this summer, on course for launch next May from Vandenberg Air Force Base in central California -- the first interplanetary launch in history from America's West Coast."
```
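For instance, a minimal sketch with Requests and BeautifulSoup, assuming the news site is reachable at the URL shown and still renders its headlines in `content_title` and `article_teaser_body` divs (both the URL and the class names are assumptions, and the teaser may only load with Selenium, as noted above):

```python
import requests
from bs4 import BeautifulSoup

# URL and CSS classes below are assumptions about the NASA Mars News page
# markup; verify them against the live site before relying on them.
url = "https://mars.nasa.gov/news/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

# Grab the first headline and teaser paragraph, if the static HTML has them.
title_tag = soup.find("div", class_="content_title")
teaser_tag = soup.find("div", class_="article_teaser_body")

news_title = title_tag.get_text(strip=True) if title_tag else None
news_p = teaser_tag.get_text(strip=True) if teaser_tag else None

print(news_title)
print(news_p)
```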

### JPL Mars Space Images - Featured Image

* Visit the url for JPL Featured Space Image [here](https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars).

* Use requests to get the site and find the image url for the current Featured Mars Image and assign the url string to a variable called `featured_image_url`.

* Make sure to find the image url to the full size `.jpg` image.

* Make sure to save a complete url string for this image. A hedged scraping sketch follows the example below.

```python
# Example:
featured_image_url = 'https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA16225_hires.jpg'
```
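One possible sketch with Requests and BeautifulSoup, assuming the page serves the full-size image path in an `a.fancybox` tag's `data-fancybox-href` attribute (that selector, like the base URL join, is an assumption about the page markup rather than a documented API):

```python
import requests
from bs4 import BeautifulSoup

# The base URL and the data-fancybox-href attribute are assumptions about
# the JPL page markup; the site has changed over time, so adjust the
# selector if it no longer matches anything.
jpl_url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
base_url = "https://www.jpl.nasa.gov"

soup = BeautifulSoup(requests.get(jpl_url).text, "html.parser")
link = soup.find("a", class_="fancybox")

featured_image_url = None
if link and link.get("data-fancybox-href"):
    # Join the relative image path to the site root to get a complete URL.
    featured_image_url = base_url + link["data-fancybox-href"]

print(featured_image_url)
```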

### Mars Weather

* Visit the Mars Weather Twitter account here and scrape the latest Mars weather tweet from the page. Save the tweet text for the weather report as a variable called `mars_weather`. A hedged sketch follows the example below.

```python
# Example:
mars_weather = 'Sol 1801 (Aug 30, 2017), Sunny, high -21C/-5F, low -80C/-112F, pressure at 8.82 hPa, daylight 06:09-17:55'
```
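A heavily hedged sketch, assuming the tweet text is present in the static HTML under a `tweet-text` class as in Twitter's older markup; in practice the page is rendered with JavaScript, so expect to fall back to Selenium or Splinter:

```python
import requests
from bs4 import BeautifulSoup

# The account URL and the tweet-text class are assumptions based on
# Twitter's older static markup; the current site renders tweets with
# JavaScript, so this will likely need Selenium or Splinter in practice.
weather_url = "https://twitter.com/marswxreport?lang=en"
soup = BeautifulSoup(requests.get(weather_url).text, "html.parser")

tweet = soup.find("p", class_="tweet-text")
mars_weather = tweet.get_text(strip=True) if tweet else None
print(mars_weather)
```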

### Mars Facts

* Visit the Mars Facts webpage here and use Pandas to scrape the table containing facts about the planet, including Diameter, Mass, etc.

* Use Pandas to convert the data to an HTML table string. A sketch with `pandas.read_html` follows this list.
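A short sketch of the Pandas approach, assuming the facts page at the URL shown (an assumption) serves a two-column HTML table that `pandas.read_html` can parse:

```python
import pandas as pd

# The facts URL is an assumption; pandas.read_html returns a list with one
# DataFrame per <table> it can parse out of the page.
facts_url = "https://space-facts.com/mars/"
tables = pd.read_html(facts_url)

# Take the first table and give its columns descriptive names.
mars_facts_df = tables[0]
mars_facts_df.columns = ["Description", "Value"]

# Convert the DataFrame to an HTML table string for the Flask template.
mars_facts_html = mars_facts_df.to_html(index=False)
print(mars_facts_html)
```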

### Mars Hemispheres

* Visit the USGS Astrogeology site here to obtain high-resolution images for each of Mars's hemispheres.

* You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.

* Save both the image url string for the full resolution hemisphere image and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys `img_url` and `title`.

* Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.

* **BONUS:** Requests does not wait for the whole page to load, which causes problems here. As a bonus, you can try to use Selenium (preferred) or Splinter to get all four links. A hedged scraping sketch follows the example below.

```python
# Example:
hemisphere_image_urls = [
    {"title": "Valles Marineris Hemisphere", "img_url": "..."},
    {"title": "Cerberus Hemisphere", "img_url": "..."},
    {"title": "Schiaparelli Hemisphere", "img_url": "..."},
    {"title": "Syrtis Major Hemisphere", "img_url": "..."},
]
```
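Under the assumption that plain Requests does return the full results page, a hedged sketch of the loop might look like the following (the URL, `div.item`, `h3`, and the "Sample" link text are all assumptions about the USGS markup; with Selenium the flow would be the same, just driven through a browser):

```python
import requests
from bs4 import BeautifulSoup

# The search URL and every selector below are assumptions about the USGS
# Astrogeology results and detail pages; adjust them to the live markup.
base_url = "https://astrogeology.usgs.gov"
search_url = base_url + "/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"

soup = BeautifulSoup(requests.get(search_url).text, "html.parser")
hemisphere_image_urls = []

# Each result is assumed to sit in a div.item with an <h3> title and a link
# to a detail page that in turn links the full-resolution "Sample" image.
for item in soup.find_all("div", class_="item"):
    title = item.find("h3").get_text(strip=True)
    detail_url = base_url + item.find("a")["href"]

    detail_soup = BeautifulSoup(requests.get(detail_url).text, "html.parser")
    sample_link = detail_soup.find("a", string="Sample")
    img_url = sample_link["href"] if sample_link else None

    hemisphere_image_urls.append({"title": title, "img_url": img_url})

print(hemisphere_image_urls)
```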


## Step 2 - MongoDB and Flask Application

Use MongoDB with Flask templating to create a new HTML page that displays all of the information that was scraped from the URLs above.

* Start by converting your Jupyter notebook into a Python script called `scrape_mars.py` with a function called `scrape` that will execute all of your scraping code from above and return one Python dictionary containing all of the scraped data.

* Next, create a route called `/scrape` that will import your `scrape_mars.py` script and call your `scrape` function.

  * Store the return value in Mongo as a Python dictionary.

* Create a root route `/` that will query your Mongo database and pass the mars data into an HTML template to display the data.

* Create a template HTML file called `index.html` that will take the mars data dictionary and display all of the data in the appropriate HTML elements. Use the following as a guide for what the final product should look like, but feel free to create your own design. A minimal `app.py` sketch follows the screenshots below.

*(Screenshots of the final app: final_app_part1.png and final_app_part2.png.)*
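A minimal `app.py` sketch of the two routes, using the Flask-PyMongo extension (plain PyMongo works just as well); the database and collection names, and the use of `upsert`, are assumptions rather than requirements:

```python
# app.py -- a minimal scaffold, not a complete solution.
from flask import Flask, render_template, redirect
from flask_pymongo import PyMongo

import scrape_mars  # the converted notebook exposing scrape()

app = Flask(__name__)

# Assumes a local MongoDB instance and a database named mars_app.
app.config["MONGO_URI"] = "mongodb://localhost:27017/mars_app"
mongo = PyMongo(app)


@app.route("/")
def index():
    # Query the single mars document and hand it to the template.
    mars = mongo.db.mars.find_one()
    return render_template("index.html", mars=mars)


@app.route("/scrape")
def scrape():
    # Run all of the scraping code and overwrite the stored document.
    mars_data = scrape_mars.scrape()
    mongo.db.mars.update_one({}, {"$set": mars_data}, upsert=True)
    return redirect("/", code=302)


if __name__ == "__main__":
    app.run(debug=True)
```

Visiting `/scrape` re-runs the scraper and redirects back to `/`, which reads the single stored document and renders it through `index.html`.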


## Hints

* Use Requests to get the sites when needed and BeautifulSoup to help find and parse out the necessary data.

* Use PyMongo for CRUD operations on your database. For this homework, you can simply overwrite the existing document each time the `/scrape` url is visited and new data is obtained; a short PyMongo sketch follows these hints.

* Use Bootstrap to structure your HTML template.
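For the overwrite hint specifically, a plain PyMongo sketch (database, collection, and field names here are assumptions) could be as simple as a single upsert:

```python
from pymongo import MongoClient

# A local MongoDB instance plus database/collection names are assumptions.
client = MongoClient("mongodb://localhost:27017")
collection = client.mars_app.mars

scraped = {"news_title": "...", "featured_image_url": "..."}  # e.g. from scrape()

# replace_one with upsert=True overwrites the existing document when there
# is one, and inserts it the first time /scrape is visited.
collection.replace_one({}, scraped, upsert=True)
```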

## Copyright

Trilogy Education Services © 2017. All Rights Reserved.
