Web Scraping




A growing share of business activity, and of our lives, now happens online, and this has led to a steady increase in the amount of publicly available data. Web scraping allows you to tap into this public information with the help of web scrapers.

In the first part of this guide to the basics of web scraping, you will learn –

  1. What is web scraping?
  2. Web scraping use cases
  3. Types of web scrapers
  4. How does a web scraper work?
  5. Difference between a web scraper and web crawler
  6. Is web scraping legal?

Web scraping is an automated method used to extract large amounts of data from websites. The data on websites is typically unstructured; web scraping helps collect this unstructured data and store it in a structured form. There are different ways to scrape websites, such as online services, APIs, or writing your own code.


What is web scraping?

Web scraping automates the process of extracting data from a website or multiple websites. Web scraping or data extraction helps convert unstructured data from the internet into a structured format allowing companies to gain valuable insights. This scraped data can be downloaded as a CSV, JSON, or XML file.

Web scraping (used synonymously with data scraping, data extraction, or web data extraction) helps transform content on the Internet into structured data that can be consumed by other computers and applications. The scraped data can help users or businesses gather insights that would otherwise be expensive and time-consuming to obtain.

Since the basic idea of web scraping is automating a task, it can be used to create web scraping APIs and Robotic Process Automation (RPA) solutions. Web scraping APIs allow you to stream scraped website data easily into your applications. This is especially useful in cases where a website does not have an API or has a rate/volume-limited API.

Uses of Web Scraping

People use web scrapers to automate all sorts of scenarios. Web scrapers have a variety of uses in the enterprise. We have listed a few below:

  • Price Monitoring – Product data influences eCommerce monitoring, product development, and investing. Extracting product data such as pricing, inventory levels, and reviews from eCommerce websites can help you create a better product strategy.
  • Marketing and Lead Generation – As a business, you need qualified leads to reach out to customers and generate sales; that means gathering details of companies, addresses, contacts, and other necessary information. Publicly available information like this is valuable, and web scraping can enhance the productivity of your research and save you time.
  • Location Intelligence – Transforming geospatial data into strategic insights can solve a variety of business challenges. By interpreting rich data sets visually, you can conceptualize the factors that affect businesses in various locations and optimize your business processes, promotions, and asset valuations.
  • News and Social Media – Social media and news data tell you how viewers engage with, share, and perceive your content. Collecting this information through web scraping lets you optimize your social content, update your SEO, monitor competitor brands, and identify influential customers.
  • Real Estate – The real estate industry offers myriad opportunities. Including web-scraped data in your business can help you identify real estate opportunities, find emerging markets, and analyze your assets.

How to get started with web scraping

There are many ways to get started with web scraping. Writing code from scratch is fine for smaller data scraping needs, but beyond that, if you need to scrape a few different types of web pages and thousands of data fields, you will need a web scraping service that can scrape multiple websites easily and at a large scale.

Custom Web Scraping Services

Many companies build their own web scraping departments, but others use web scraping services. While it may make sense to start an in-house web scraping solution, the time and cost involved often outweigh the benefits. Hiring a custom web scraping service ensures that you can concentrate on your projects.

Web scraping companies such as ScrapeHero have the technology and scalability to handle web scraping tasks that are complex and massive in scale – think millions of pages. You need not worry about setting up and running scrapers, rotating proxies, bypassing CAPTCHAs, or the other countermeasures websites use to block web scraping.

Web Scraping Tools and Software


Point and click web scraping tools have a visual interface, where you can annotate the data you need, and it automatically builds a web scraper with those instructions. Web Scraping tools (free or paid) and self-service applications can be a good choice if the data requirement is small, and the source websites aren’t complicated.

ScrapeHero Cloud has pre-built scrapers that, in addition to scraping search engine data, can scrape job data, real estate data, social media, and more. These scrapers are easy to use and cloud-based; you need not worry about selecting the fields to be scraped or downloading any software. The scraper and the data can be accessed from any browser at any time, and the data can be delivered directly to Dropbox.

Scraping Data Yourself

You can build web scrapers in almost any programming language, and it is easier with scripting languages such as JavaScript (Node.js), PHP, Perl, Ruby, or Python. If you are a developer, open-source web scraping tools can also help you with your projects, as the sketch below shows. If you are new to web scraping, these tutorials and guides can help you get started.
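For instance, here is a minimal sketch of what a spider built with Scrapy (one of the open-source Python tools mentioned later in this guide) can look like. The target site, quotes.toscrape.com, is a public scraping sandbox, and the CSS selectors are simply the classes that sandbox happens to use; treat this as an illustration rather than a production scraper.

```python
import scrapy

class QuotesSpider(scrapy.Spider):
    """Crawls the sandbox site and yields one record per quote."""
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Each quote block carries the quote text and its author.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow the pagination link, if present, and parse it the same way.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

Saved as quotes_spider.py, it can be run with "scrapy runspider quotes_spider.py -o quotes.json" to produce structured JSON output.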

If you don't like or want to code, ScrapeHero Cloud is just right for you!

Skip the hassle of installing software, programming, and maintaining the code. Download this data using ScrapeHero Cloud within seconds.

How does a web scraper work

A web scraper is a software program or script that is used to download the contents (usually text-based and formatted as HTML) of multiple web pages and then extract data from it.

Web scrapers are more complicated than this simplistic representation. They have multiple modules that perform different functions.

What are the components of a web scraper

Web scraping works like any other Extract-Transform-Load (ETL) process. Web scrapers crawl websites, extract data from them, transform the data into a usable structured format, and load it into a file or database for subsequent use.

A typical web scraper has the following components:

1. Crawl

First, we start at the data source and decide which data fields we need to extract. For that, we have web crawlers that crawl the website and visit the links we want to extract data from. (e.g., the crawler will start at https://scrapehero.com and crawl the site by following links on the home page.)

The goal of a web crawler is to learn what is on a web page so that the information can be retrieved when it is needed. The crawl can be limited to what it finds on a given site, or it can search the whole web (just as the Google search engine does).
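As a rough illustration of this crawl step, the sketch below (plain Python with requests and BeautifulSoup, both assumed to be installed) starts from one URL, follows links within the same site, and records the title of each page it visits; the page limit is there only to keep the example small.

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=20):
    """Breadth-first crawl of a single site, returning {url: page title}."""
    domain = urlparse(start_url).netloc
    seen, queue, pages = {start_url}, deque([start_url]), {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        response = requests.get(url, timeout=10)
        soup = BeautifulSoup(response.text, "html.parser")
        pages[url] = soup.title.get_text(strip=True) if soup.title else ""
        # Queue every same-site link we have not seen yet.
        for anchor in soup.find_all("a", href=True):
            target = urljoin(url, anchor["href"])
            if urlparse(target).netloc == domain and target not in seen:
                seen.add(target)
                queue.append(target)
    return pages

# Example: crawl("https://scrapehero.com") follows links from the home page.
```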

2. Parse and Extract

Extraction is the process of taking the raw scraped data, which is in HTML format, and parsing out the meaningful data elements. In some cases this is simple, such as getting product details from a web page; in others it is more difficult, such as retrieving the right information from complex documents.

You can use data extractors and parsers to extract the information you need. There are different kinds of parsing techniques: regular expressions, HTML parsing, DOM parsing (using a headless browser), or automatic extraction using AI.
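To make the difference between these techniques concrete, the sketch below applies two of them, a regular expression and HTML parsing with BeautifulSoup, to the same HTML fragment; the markup and field names are invented for illustration.

```python
import re
from bs4 import BeautifulSoup

html = '<div class="product"><h2>Acme Kettle</h2><span class="price">$29.99</span></div>'

# 1. Regular expression: quick, but brittle if the markup changes.
price_match = re.search(r'class="price">\$?([\d.]+)<', html)
price_via_regex = price_match.group(1) if price_match else None

# 2. HTML parsing: navigate the document structure instead of raw text.
soup = BeautifulSoup(html, "html.parser")
record = {
    "name": soup.select_one("div.product h2").get_text(strip=True),
    "price": soup.select_one("span.price").get_text(strip=True),
}

print(price_via_regex)  # 29.99
print(record)           # {'name': 'Acme Kettle', 'price': '$29.99'}
```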

3. Format

The extracted data now needs to be formatted into a human-readable form. This can be a simple data format such as CSV, JSON, or XML. You can store the data depending on the specifications of your data project.

The data extracted using a parser won’t always be in the format that is suitable for immediate use. Most of the extracted datasets need some form of “cleaning” or “transformation.” Regular expressions, string manipulation, and search methods are used to perform this cleaning and transformation.
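A small, hypothetical example of that cleaning step: regular expressions and string methods turn raw scraped strings into typed values ready for analysis. The field names and values are placeholders.

```python
import re

raw = {"name": "  Acme Kettle \n", "price": "$1,299.00", "reviews": "1,024 reviews"}

clean = {
    "name": raw["name"].strip(),
    # Strip the currency symbol and thousands separators, then cast to float.
    "price": float(re.sub(r"[^\d.]", "", raw["price"])),
    # Keep only the digits from "1,024 reviews".
    "reviews": int(re.sub(r"[^\d]", "", raw["reviews"])),
}

print(clean)  # {'name': 'Acme Kettle', 'price': 1299.0, 'reviews': 1024}
```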

4. Store and Serialize Data

After the data has been scraped, extracted, and formatted you can finally store and export the data. Once you get the cleaned data, it needs to be serialized according to the data models that you require. Choosing an export method largely depends on how large your data files are and what data exports are preferred within your company.

This is the final module that will output data in a standard format that can be stored in Databases using ETL tools (Check out our guide on ETL Tools), JSON/CSV files, or data delivery methods such as Amazon S3, Azure Storage, and Dropbox.
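As a minimal sketch of this final module, the snippet below serializes the same kind of cleaned records to both CSV and JSON; the file names and fields are placeholders, and a database or cloud-storage export would follow the same pattern.

```python
import csv
import json

records = [
    {"name": "Acme Kettle", "price": 1299.0, "reviews": 1024},
    {"name": "Acme Toaster", "price": 499.0, "reviews": 87},
]

# CSV export: one row per record, with a header row.
with open("products.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "price", "reviews"])
    writer.writeheader()
    writer.writerows(records)

# JSON export: the same records, serialized as a list of objects.
with open("products.json", "w", encoding="utf-8") as f:
    json.dump(records, f, indent=2)
```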

ScrapeHero crawls, parses, formats, stores and delivers the data for no additional charge.

Web Crawling vs. Web Scraping

People often use Web Scraping and Web Crawling interchangeably. Although the underlying concept is to extract data from the web, they are different.

Web crawling mostly refers to downloading and storing the contents of a large number of websites by following links in web pages. A web crawler is a standalone bot that scans the internet, searching for and indexing content. In general, a ‘crawler’ implies the ability to navigate pages on its own. Crawlers are the backbone of search engines like Google, Bing, and Yahoo.

A Web scraper is built specifically to handle the structure of a particular website. The scraper then uses this site-specific structure to extract individual data elements from the website. Unlike a web crawler, a web scraper extracts specific information such as pricing data, stock market data, business leads, etc.

Is web scraping legal?

Although web scraping is a powerful technique in collecting large data sets, it is controversial and may raise legal questions related to copyright and terms of service. Most times a web scraper is free to copy a piece of data from a web page without any copyright infringement. This is because it is difficult to prove copyright over such data since only a specific arrangement or a particular selection of the data is legally protected.

Legality depends entirely on the legal jurisdiction (laws are country- and locality-specific). Gathering or scraping publicly available information is not illegal; if it were, Google would not exist as a company, because it scrapes data from websites all over the world.

Terms of Service

Although most web applications and companies include some form of TOS agreement, web scraping lies within a gray area. For instance, the owner of a web scraper that violates the TOS may argue that he or she never saw or officially agreed to the TOS.

Some forms of web scraping can be illegal, such as scraping non-public or undisclosed data. Non-public data is data that isn't reachable or open to the public; an example would be the theft of intellectual property.

Ethical Web Scraping

If a web scraper sends requests too frequently, the website may block it. The scraper may be refused entry and may be liable for damages, because the owner of the web application has a property interest. An ethical scraping tool or a professional web scraping service will avoid this by maintaining a reasonable request frequency. We talk in other guides about how you can make your scraper more “polite” so that it doesn’t get you into trouble.
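One possible sketch of such a “polite” scraper in Python: it checks robots.txt with the standard-library robotparser and pauses between requests. The base URL, user agent string, and two-second delay are placeholders; choose values that suit the site you are actually scraping.

```python
import time
from urllib import robotparser

import requests

BASE = "https://www.example.com"        # placeholder site
USER_AGENT = "my-polite-scraper"        # identify yourself honestly

# Read the site's robots.txt once, up front.
rp = robotparser.RobotFileParser()
rp.set_url(f"{BASE}/robots.txt")
rp.read()

urls = [f"{BASE}/page/{i}" for i in range(1, 4)]
for url in urls:
    if not rp.can_fetch(USER_AGENT, url):
        continue  # robots.txt disallows this path, so skip it
    requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
    time.sleep(2)  # wait between requests instead of hammering the server
```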

What’s next?

Let’s do something hands-on before we get into web page structures and XPaths. We will make a very simple scraper to scrape Reddit’s top pages and extract the title and URLs of the links shared.
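As a rough preview of that scraper (Part 2 builds it properly with Python and BeautifulSoup), the sketch below fetches Reddit’s top page and prints each post’s title and URL. It assumes the old.reddit.com markup, where title links carry the “title” class; Reddit can change this at any time, and the User-Agent string is just an illustrative placeholder.

```python
import requests
from bs4 import BeautifulSoup

response = requests.get(
    "https://old.reddit.com/top/",
    headers={"User-Agent": "web-scraping-tutorial (contact: you@example.com)"},
    timeout=10,
)
soup = BeautifulSoup(response.text, "html.parser")

# Assumed selector: on old.reddit.com each post title is an <a class="title">
# inside a <p class="title"> element.
for link in soup.select("p.title > a.title"):
    print(link.get_text(strip=True), "->", link.get("href"))
```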

Check out parts 2 and 3 of this post here – A beginner's guide to Web Scraping: Part 2 – Build a web scraper for Reddit using Python and BeautifulSoup

Web Scraping Tutorial for Beginners – Part 3 – Navigating and Extracting Data – Navigating and Scraping Data from Reddit




NextStep 2019 was an exciting event that drew professionals from multiple countries and several sectors. One of our most popular technical sessions was on how to scrape website data. Presented by Miguel Antunes, an OutSystems MVP and Tech Lead at one of our partners, Do iT Lean, this session is available on-demand. But, if you prefer to just quickly read through the highlights…keep reading, we’ve got you covered!

As developers, we all love APIs. It makes our lives that much easier. However, there are times when APIs aren’t available, making it difficult for developers to access the data they need. Thankfully, there are still ways for us to access this data required to build great solutions.

What Is Web Scraping?

Web scraping is the act of pulling data directly from a website by parsing the HTML from the web page itself. It refers to retrieving or “scraping” data from a website. Instead of going through the difficult process of physically extracting data, web scraping employs cutting-edge automation to retrieve countless data points from any number of websites.

If a browser can render a page, and we can parse the HTML in a structured way, it’s safe to say we can perform web scraping to access all the data.

Benefits of Web Scraping and When to Use It

You don’t have to look far to come up with many benefits of web scraping.

  • No rate-limits: Unlike with APIs, there aren’t any rate limits to web scraping. With APIs, you need to register an account to receive an API key, limiting the amount of data you’re able to collect based on the limitations of the package you buy.
  • Anonymous access: Since there’s no API key, your information can’t be tracked. Only your IP address and cookies can be tracked, but that can easily be fixed through spoofing, allowing you to remain perfectly anonymous while accessing the data you need.
  • The data is already available: When you visit a website, the data is public and available. There are some legal concerns regarding this, but most of the time, you just need to understand the terms and conditions of the website you’re scraping, and then you can use the data from the site.

How to Web Scrape with OutSystems: Tutorial

Regardless of the language you use, there’s an excellent scraping library that’s perfectly suited to your project:

  • Python: BeautifulSoup or Scrapy
  • Ruby: Upton, Wombat or Nokogiri
  • Node: Scraperjs or X-ray
  • Go: Scrape
  • Java: Jaunt

OutSystems is no exception. Its Text and HTML Processing component is designed to interpret the text from the HTML file and convert it to an HTML Document (similar to a JSON object). This makes it possible to access all the nodes.

It also extracts information from plain text data with regular expressions, or from HTML with CSS selectors. You’ll be able to manipulate HTML documents with ease while sanitizing user input against HTML injection.

But what does web scraping look like in real life? Let’s take a look at scraping an actual website.


We start with a simple plan:

  • Pinpoint your target: a simple HTML website;
  • Design your scraping scheme;
  • Run and let the magic happen.

Scraping an Example Website

Our example website is www.bank-code.net, a site that lists all the SWIFT codes from the banking industry. There’s a ton of data here, so let’s get scraping.


If you want to collect these SWIFT codes for an internal project, it would take hours to copy them manually. With scraping, extracting the data takes a fraction of that time.

  • Navigate to your OutSystems personal environment, and start a new app (if you don't have one yet, sign-up for OutSystems free edition);
  • Choose “Reactive App”;
  • Fill in your app’s basic information, including its name and a description of the app to continue;
  • Click on “Create Module”;
  • Reference the library you’re going to use from the Forge component, which in this case is the “Text and HTML Processing” library;
  • Go to the website and copy the URL, for example: https://bank-code.net/country/PORTUGAL-%28PT%29/100. We’re going to use Portugal as a baseline for this tutorial;
  • In the OutSystems app, create a REST API integration with the website. It’s basically just a GET request; paste the copied URL there;
  • Notice that the pagination offset is already present in the URL: the “/100” part. Change it to a REST input parameter;
  • Out of our set of actions, we’ll use the ones designed to work with HTML, in this case Attributes or Elements. We can send the website’s HTML text to these actions, which return the HTML document mentioned before: a JSON-like object in which you can access all the nodes of the HTML.

Now we can create our action to scrape the website. Let’s call it “Scrape”, for example.

  • Use the endpoint previously created, which will gather the HTML. We’ll parse this HTML text into our document;
  • Going back to the website in Chrome, right-click on the part of the page with the content you’d like to scrape. Click “Inspect” and, in the panel that opens, identify the table you’d like to scrape;
  • Since the table has its own ID, it will be unique across the HTML text, making it easy to identify in the text;
  • Now that we have the table, we want all of its rows. You can identify the selector for a row by expanding the HTML until you see the rows, right-clicking one of them, and choosing Copy > Copy Selector; this gives you “#tableID > tbody > tr:nth-child(1)” for the first row. Since we want all of them, we’ll use “#tableID > tbody > tr”;
  • You now have all the elements for the table rows. It’s time to iterate over the rows and select all the columns;
  • Now, select each column’s text using the HTML document and the selector from the last action, plus our column selector: “> td:nth-child(2)” selects the second column, which contains the bank name. For the other columns, you just change the “child(n)” index.

Once you have scraped all the information, check whether the code already exists in your database: if it does, update the record; if it doesn’t, create it. This should give you all the records for the first page of the website when you hit 1-Click Publish.

The process above is basically our tool for parsing the data from the first page. We identify the site, identify the content that we want, and identify how to get the data. This runs all the rows of the table and parses all the text from the columns, storing it in our database.
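For comparison, a rough Python equivalent of that flow, using requests and BeautifulSoup, might look like the sketch below. The “#tableID” placeholder stands in for the real id of the SWIFT-code table, exactly as in the selector copied from Chrome, so you would need to inspect the page and substitute the actual id (and confirm the column order) before running it.

```python
import requests
from bs4 import BeautifulSoup

# Same URL as in the tutorial; "/100" is the pagination offset.
url = "https://bank-code.net/country/PORTUGAL-%28PT%29/100"
response = requests.get(url, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

# "#tableID" is a placeholder: replace it with the table's real id.
rows = soup.select("#tableID > tbody > tr")
for row in rows:
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if len(cells) >= 2:
        bank_name = cells[1]  # second column holds the bank name, per the selector above
        print(bank_name, cells)
```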

For the full code used in this example, you can go to the OutSystems Forge and download it from there.

Web Scraping Enterprise Scale: Real-Life Scenario - Frankort & Koning

So, you may think that this was a nice and simple example of scraping a website, but how can you apply this at the enterprise level? To illustrate this tool’s effectiveness at an enterprise-level, we’ll use a case study of Frankort & Koning, a company we did this for.

Frankort & Koning is a Netherlands-based fresh fruit and vegetable company. They buy products from producers and sell them to the market. Because they trade in fresh produce, there are many regulations governing their industry, and Frankort & Koning needs to check each product that they buy to resell.

Imagine how taxing it would be to check each product coming into their warehouse to make sure that all the producers and their products are certified by the relevant industry watchdog. This needs to be done multiple times per day per product.

GlobalGap has a very basic database, which they use to give products a thirteen-digit GGN (GlobalGap Number). This number identifies the producer, allowing them to track all the products and determine if they're really fresh. This helps Frankort & Koning certify that the products are suitable to be sold to their customers. Since GlobalGap doesn't have any API to assist with this, this is where the scraping part comes in.

To work with the database as it is now, you need to enter the GGN number into the website manually. Once the information loads, there will be an expandable table at the bottom of the page. Clicking on the relevant column will provide you with the producer’s information and whether they’re certified to sell their products. Imagine doing this manually for each product that enters the Frankort & Koning warehouse. It would be totally impractical.

How Did We Perform Web Scraping for Frankort & Koning?

We identified the need for some automation here. Selenium was a great tool to set up the automation we required. Selenium automates user interactions on a website. We created an OutSystems extension with Selenium and Chrome driver.

This allowed Selenium to run Chrome instances on the server. We also needed to give Selenium some instructions on how to do the human interaction. After we took care of the human interaction aspect, we needed to parse the HTML to bring the data to our side.

The instructions Selenium needed to automate the human interaction included identifying our base URL and the 'Accept All Cookies' button that popped up when the website opened, so that we could program a click on it.

We also needed to produce instructions on how to interact with the collapse icon on the results table and the input field where the GGN number would be entered. We set all of this up to run on an OutSystems timer and ran Chrome in headless mode.

We told Selenium to go to our target website and find the cookie button and input elements. We then sent the keys, as the user entered the GGN number, to the system and waited a moment for the page to be rendered. After this, we iterated all the results, and then output the HTML back to the OutSystems app.
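Our extension was built in OutSystems, but the same Selenium flow can be sketched in Python, which may make the sequence of instructions easier to follow. The URL, element locators, and GGN value below are illustrative assumptions, not the actual identifiers used on the GlobalGap site.

```python
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

COOKIE_BUTTON = "accept-all-cookies"   # hypothetical element id
SEARCH_INPUT = "ggn-search"            # hypothetical element id

options = Options()
options.add_argument("--headless")     # run Chrome without a visible window
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://database.globalgap.org")   # assumed database URL
    driver.find_element(By.ID, COOKIE_BUTTON).click()   # dismiss the cookie banner
    search = driver.find_element(By.ID, SEARCH_INPUT)
    search.send_keys("4049929999999")  # a made-up 13-digit GGN
    search.send_keys(Keys.RETURN)
    time.sleep(3)                      # crude wait for the results to render
    html = driver.page_source          # hand the rendered HTML back to the parser
finally:
    driver.quit()
```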

This is how we tie together automation and user interaction with web scraping.

These are the numbers we worked with for Frankort & Koning:

  • 700+ producers supplying products
  • 160+ products provided each day
  • 900+ certificates - the number of checks they needed to perform daily
  • It would’ve taken about 15 hours to process this information manually
  • Instead, it took only two hours to process this information automatically

This is just one example of how web scraping can contribute to bottom-line savings in an organization.

Still Got Questions?


Just drop me a line! And in the meantime, if you enjoyed my session, take a look at the NextStep 2020 conference, now available on-demand, with more than 50 sessions presented by thought leaders driving the next generation of innovation.

