Arquivo-web-crawler


What is Arquivo-web-crawler?

About

Arquivo-web-crawler is an archiver operated by Arquivo.pt. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us.

Track Arquivo-web-crawler Visiting Your Website
You can see when Arquivo-web-crawler visits your website using the API or WordPress plugin.
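If you prefer to check your own server logs directly, the sketch below scans a standard combined access log for the crawler's user agent token. The log path, log format, and exact user agent string are assumptions; adjust them for your server.

# A minimal sketch: list visits whose user agent contains the Arquivo-web-crawler token.
# Assumes the nginx/Apache combined log format; the path is hypothetical.
import re

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path
TOKEN = "arquivo-web-crawler"           # matched case-insensitively

def crawler_visits(log_path=LOG_PATH):
    """Yield (ip, timestamp, request) for log lines from the crawler."""
    # Combined format: IP - - [time] "request" status bytes "referer" "user-agent"
    pattern = re.compile(
        r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
        r'\d+ \S+ "[^"]*" "(?P<agent>[^"]*)"'
    )
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = pattern.match(line)
            if match and TOKEN in match.group("agent").lower():
                yield match.group("ip"), match.group("time"), match.group("request")

for ip, time, request in crawler_visits():
    print(time, ip, request)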

Detail

Operator: Arquivo.pt
Documentation: https://arquivo.pt/faq-crawling

Type

Archiver
Takes snapshots of websites for historical databases

Expected Behavior

Archivers visit websites on a roughly regular cadence, since evenly spaced snapshots make for a more useful historical record. Popular websites get more frequent visits because they are more likely to be queried in the historical database later.
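If you want to see how regular that cadence is for your own site, a small sketch like the one below computes the average gap between visits, assuming you already have the visit timestamps (for example, from your access logs). The dates shown are hypothetical.

from datetime import datetime, timedelta

def average_visit_gap(timestamps):
    """Average interval between consecutive visits; needs at least two timestamps."""
    ordered = sorted(timestamps)
    gaps = [later - earlier for earlier, later in zip(ordered, ordered[1:])]
    return sum(gaps, timedelta()) / len(gaps)

# Hypothetical visit times, for illustration only.
visits = [
    datetime(2024, 1, 1, 3, 15),
    datetime(2024, 1, 8, 2, 50),
    datetime(2024, 1, 15, 4, 5),
]
print("average gap between visits:", average_visit_gap(visits))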

Insights

Arquivo-web-crawler's Activity on Your Website

Half of your traffic probably comes from artificial agents, and there are more of them every day. Track their activity with the API or WordPress plugin.

Set Up Agent Analytics

Other Websites

0% of top websites are currently blocking Arquivo-web-crawler in some way
Learn How →

Access Control

Should I Block Arquivo-web-crawler?

It's up to you. Digital archiving is generally done to preserve a historical record. If you don't want to be part of that record for some reason, you can block archivers.

Using Robots.txt

User Agent Token: Arquivo-web-crawler
Description: Should match instances of Arquivo-web-crawler

You can block Arquivo-web-crawler or limit its access by setting user agent token rules in your website's robots.txt.

# robots.txt
# This should block Arquivo-web-crawler

User-agent: Arquivo-web-crawler
Disallow: /
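If you want to confirm the rule does what you expect before deploying it, Python's standard urllib.robotparser can evaluate it locally. The example URL is a placeholder for your own site.

from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Arquivo-web-crawler
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Blocked for the archiver, still allowed for other agents by default.
print(parser.can_fetch("Arquivo-web-crawler", "https://example.com/page"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/page"))         # True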

Instead of doing this manually, you can use the API or WordPress plugin to automatically keep your robots.txt updated with the latest known AI scrapers, crawlers, and assistants.
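If you'd rather script the manual edit yourself, a generic sketch like the one below (not the API or WordPress plugin mentioned above) appends the block rule to a robots.txt file when the user agent token isn't already present. The file path is an assumption.

from pathlib import Path

ROBOTS_PATH = Path("robots.txt")  # hypothetical location of your robots.txt
BLOCK_RULE = "User-agent: Arquivo-web-crawler\nDisallow: /\n"

def ensure_block_rule(path=ROBOTS_PATH):
    """Append the block rule unless the user agent token is already mentioned."""
    existing = path.read_text(encoding="utf-8") if path.exists() else ""
    if "arquivo-web-crawler" in existing.lower():
        return  # already covered; leave the file alone
    updated = existing.rstrip("\n")
    updated = (updated + "\n\n" if updated else "") + BLOCK_RULE
    path.write_text(updated, encoding="utf-8")

ensure_block_rule()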

Set Up Automatic Robots.txt