Web Crawler
In this section, we present how to use a web crawler within MindsDB.
A web crawler is a computer program or automated script that browses the internet and navigates through websites, web pages, and web content to gather data. Within MindsDB, a web crawler can be employed to harvest data, which can then be used to train models, power domain-specific chatbots, or fine-tune LLMs.
Connection
This handler does not require any connection parameters.
Here is how to initialize a web crawler:
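The following is a minimal sketch of the initialization statement; the connection name my_web is an arbitrary placeholder you can replace with any name of your choice:

```sql
-- Create a connection that uses the web crawler handler;
-- 'my_web' is a placeholder name of your choice.
CREATE DATABASE my_web
WITH ENGINE = 'web';
```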
If you installed MindsDB locally via pip, you need to install all handler dependencies manually. To do so, go to the handler's folder (mindsdb/integrations/handlers/web_handler) and run this command: pip install -r requirements.txt.
Usage
Get Websites Content
Here is how to get the content of docs.mindsdb.com:
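Assuming the connection was created as my_web above, a query along these lines returns the page content by filtering the handler's crawler table on the url column:

```sql
-- Fetch the content of a single page.
SELECT *
FROM my_web.crawler
WHERE url = 'docs.mindsdb.com'
LIMIT 1;
```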
You can also get the content of internal pages. Here is how to fetch the content from 10 internal pages:
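As a sketch, the LIMIT clause controls how many pages are crawled, so raising it makes the crawler follow internal links from the starting page:

```sql
-- Crawl the starting page and follow internal links,
-- returning up to 10 pages in total.
SELECT *
FROM my_web.crawler
WHERE url = 'docs.mindsdb.com'
LIMIT 10;
```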
Another option is to get the content from multiple websites.
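One way to do this, assuming the same my_web connection, is to filter the url column with an IN list; the second URL below is only an illustrative example:

```sql
-- Crawl pages from each of the listed sites.
SELECT *
FROM my_web.crawler
WHERE url IN ('docs.mindsdb.com', 'docs.python.org')
LIMIT 1;
```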
Get PDF Content
MindsDB accepts file uploads of csv, xlsx, xls, sheet, json, and parquet. However, you can utilize the web crawler to fetch data from pdf files.
For example, you can provide a link to a pdf file stored in Amazon S3.
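A query of the same shape should work for a PDF; the bucket and file name below are hypothetical placeholders for your own file:

```sql
-- Extract text content from a PDF file reachable over HTTP(S);
-- the S3 URL is a placeholder, not a real file.
SELECT *
FROM my_web.crawler
WHERE url = 'https://your-bucket.s3.amazonaws.com/sample-document.pdf'
LIMIT 1;
```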