Overview:
Veeva Web2PDF is a free web solution that converts dynamic digital content, such as websites, to PDFs. These rendered PDFs allow faster, more accurate review and approval.
When converting a web page to PDF using Veeva Web2PDF, the following error is displayed:
Veeva Web2PDF cannot generate a PDF for {URL} per your request. The robots.txt file on {URL} prevents Veeva Web2PDF crawlers from accessing pages on the site. Please update the robots.txt file if you would like Veeva Web2PDF to crawl the site.
Root Cause:
The robots.txt file on {URL} contains rules that block the Veeva Web2PDF crawler from accessing pages on the site, so the PDF cannot be generated.
Solution:
The robots.txt file is standardized across websites and is straightforward to update. The customer's website administrators need to follow the instructions in the pages below to make the corresponding change in the robots.txt file.
Google provides a helpful guide on the file format: Introduction to robots.txt
For the Veeva Web2PDF crawler, the user-agent is VeevaWeb2PDFCrawler. Instructions for the corresponding robots.txt entry are provided in the article What is robots.txt, and how does Veeva Web2PDF handle it? A sample entry is shown below.
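As an illustration only, a minimal robots.txt entry that permits the Veeva Web2PDF crawler while leaving rules for other crawlers unchanged could look like the following (an empty Disallow line means nothing is blocked for that user-agent; administrators should scope the rules to the pages they actually want rendered):

    User-agent: VeevaWeb2PDFCrawler
    Disallow:

After publishing the change, one quick way to sanity-check it before re-running Web2PDF is Python's standard library robots.txt parser. The URL below is a placeholder for the customer's site:

    from urllib.robotparser import RobotFileParser

    # Point the parser at the site's published robots.txt (placeholder URL)
    parser = RobotFileParser()
    parser.set_url("https://www.example.com/robots.txt")
    parser.read()

    # True indicates the Veeva Web2PDF crawler is allowed to access the page
    print(parser.can_fetch("VeevaWeb2PDFCrawler", "https://www.example.com/"))

Note that robots.txt rules are evaluated by Veeva Web2PDF as described in the article linked above; this check only confirms the file itself allows the VeevaWeb2PDFCrawler user-agent.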
Related Documentation:
- Vault Help Documentation: About Veeva Web2PDF
- Vault Help Documentation: Configuring Veeva Web2PDF User Actions