Search Crawler
A no-code way to index external content into Zendesk search

My role
User Research & Discovery
UX & Interaction Design
Collaboration & Dev Handoff
Testing & Iteration
Team
Product Manager
Lead Designer (me)
Engineering Lead
Engineering Team
Company
Zendesk
Year
2020
Background and problem
Imagine being a CX manager who needs consistent information available across multiple systems — or a support agent who wants to share helpful content in a ticket, but that information lives outside the Zendesk ecosystem.
In both cases, finding and sharing the right content is fragmented, slowing down response times and creating inconsistent customer experiences.
Solution
Search Crawler is part of a broader Zendesk initiative to make all relevant knowledge—whether public or internal—searchable from a single source.
By allowing admins to index external content into Zendesk, users can find and surface information from any system without needing to switch tools or build custom integrations.
This initiative aimed to:
Enable a no-code experience for content managers
Reduce dependency on engineering teams
Deliver a seamless search experience across multiple content sources
Discovery & research
Since Federated Search API integrations required significant technical effort, our goal was to understand customer needs before building.
Together with my Product Manager (PM), I created a research plan and conducted moderated interviews with 8 enterprise customers.
Key insights:
Strong demand for federated search across internal and external content
Most teams needed to index up to 1,000 URLs or pages
APIs were a major barrier — engineering resources were limited, and teams wanted autonomy over which content to include
These findings validated the need for a no-code setup flow and clear content control mechanisms.
Design Process
🧭 1. Revamp Search Settings information architecture (IA)
The previous Search Settings page only supported Help Center sources, so I redesigned it into a modular and scalable IA to support new features like Crawlers.
Introduced separate sections for Sources, Featured Articles, Crawlers, and Filters
Enabled clear navigation and easier discovery through “Manage” entry points
Created a scalable layout ready for future search capabilities
Before: Old search settings were cluttered, rigid, and not built for growth
After: Scalable IA that can support more actions inside the search settings
🪜 2. Stepper Flow for Setup
Setting up a crawler required completing several dependent actions (e.g., defining URLs, scheduling, and authentication).
A stepper flow was introduced to:
Break down a complex setup into clear, sequential steps
Prevent errors by locking next steps until required inputs were completed
Provide contextual guidance at each step, reducing dependency on documentation
This guided setup gave non-technical users confidence to complete configurations independently.
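To make the gating behavior concrete, here is a minimal TypeScript sketch of how a stepper can lock later steps until earlier inputs are complete. The step names, fields, and completion rules are illustrative assumptions, not the shipped Zendesk implementation.

```typescript
// Illustrative draft model for a crawler being configured.
// All field names here are assumptions for this sketch.
interface CrawlerDraft {
  name?: string;
  startUrls?: string[]; // pages the crawler begins indexing from
  schedule?: "daily" | "weekly";
  auth?: { username: string; password: string } | null; // null = public site
}

type StepId = "define-urls" | "schedule" | "authentication" | "review";

const order: StepId[] = ["define-urls", "schedule", "authentication", "review"];

// Each step declares when it is complete.
const stepComplete: Record<StepId, (d: CrawlerDraft) => boolean> = {
  "define-urls": (d) => !!d.name && (d.startUrls?.length ?? 0) > 0,
  schedule: (d) => d.schedule !== undefined,
  authentication: (d) => d.auth !== undefined, // explicitly set, even if null
  review: () => true,
};

// A step is unlocked only when every preceding step is complete,
// which is what prevents users from skipping required inputs.
function isUnlocked(step: StepId, draft: CrawlerDraft): boolean {
  const index = order.indexOf(step);
  return order.slice(0, index).every((s) => stepComplete[s](draft));
}
```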
📊 3. Crawler tabular dashboard
Once crawlers were created, users needed visibility into performance and status.
I designed a central tabular dashboard summarizing:
Active crawlers and their latest indexing time
Content volume (pages indexed, failed, pending)
Quick access to pause, edit, or delete crawlers
The tabular dashboard became the main control center for admins to monitor content indexing at scale, while the edit page supported troubleshooting and configuration changes for individual crawlers.
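A hedged sketch of the row model behind such a table follows; the field names and the summarize() helper are inferred from the columns described above, not taken from the actual Zendesk data model.

```typescript
// Assumed shape of one dashboard row; fields mirror the columns
// listed above, not the real Zendesk API.
interface CrawlerRow {
  id: string;
  name: string;
  status: "active" | "paused" | "failed";
  lastIndexedAt: Date | null; // latest successful indexing run, if any
  pagesIndexed: number;
  pagesFailed: number;
  pagesPending: number;
}

// The roll-up an admin scanning the table cares about:
// total indexed volume and which crawlers need attention.
function summarize(rows: CrawlerRow[]) {
  return {
    totalIndexed: rows.reduce((sum, r) => sum + r.pagesIndexed, 0),
    needsAttention: rows.filter(
      (r) => r.status === "failed" || r.pagesFailed > 0
    ),
  };
}
```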
💡 4. Insights from Beta Testing
Before general availability (GA), we recruited 8 enterprise beta customers to test the Crawler experience end to end. The goal was to ensure the new setup flow, feedback messages, and error states felt clear and reliable in real-world conditions.


User testing key takeaways:
Setup complexity: Users needed a clearer step-by-step flow to complete configuration confidently.
Communication gaps: Progress and status updates were often unclear, leading to uncertainty during long indexing times.
Error handling: Error messages needed to be more visible and actionable to reduce dependency on technical teams.
Content record limits: Several enterprise users reached the maximum number of indexed pages, revealing the need to increase the content record cap for larger sites.
These insights guided the final refinements, making the Crawler simpler and more approachable for non-technical users.
🛡️ 5. Designing for Trust: Clear Feedback & Error Communication
The customer interviews made it clear that vague error messages and a lack of feedback were major pain points.
Users often didn’t know whether the crawler was running, delayed, or failed — creating confusion and support tickets. To address this, we focused on error states and communication clarity, working closely with our content designer to ensure messages were consistent, human, and actionable.
Together, we built:
Clear success, error, and inline messages using a unified formula and tone
Email and in-product alerts to keep admins informed during long indexing or verification processes
These improvements turned complex technical issues into understandable, transparent feedback, giving users confidence that the system was reliable and easy to use.
Errors and success states


Email notification logic


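To show the kind of rules we mapped for these alerts, here is a minimal TypeScript sketch; the event shapes, the 60-minute threshold, and the notifyAdmin() helper are all assumptions for illustration rather than the shipped logic.

```typescript
// Hypothetical crawl lifecycle events driving the notification rules.
type CrawlEvent =
  | { kind: "completed"; pagesIndexed: number; pagesFailed: number }
  | { kind: "failed"; reason: string }
  | { kind: "still-running"; elapsedMinutes: number };

const LONG_RUN_MINUTES = 60; // assumed threshold for a "long" indexing run

function notifyAdmin(channel: "email" | "in-product", message: string): void {
  // Placeholder for the actual delivery mechanism.
  console.log(`[${channel}] ${message}`);
}

function onCrawlEvent(crawlerName: string, event: CrawlEvent): void {
  switch (event.kind) {
    case "completed":
      if (event.pagesFailed > 0) {
        // Partial success: surface it, and say what to do next.
        notifyAdmin(
          "email",
          `${crawlerName} finished, but ${event.pagesFailed} pages failed. Review them in the dashboard.`
        );
      }
      break;
    case "failed":
      // Hard failures always reach the admin, with the reason spelled out.
      notifyAdmin("email", `${crawlerName} stopped: ${event.reason}.`);
      break;
    case "still-running":
      // Long runs get an in-product reassurance instead of silence.
      if (event.elapsedMinutes >= LONG_RUN_MINUTES) {
        notifyAdmin("in-product", `${crawlerName} is still indexing. No action needed.`);
      }
      break;
  }
}
```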
Impact
The new Crawler experience transformed how enterprise customers connect content across tools:
Reduced setup time from days of engineering work to under 1 hour using a no-code flow
Increased discoverability of external content by ~40% in early beta usage
Enabled autonomy for CX teams to manage and control indexed sources without developer involvement
Established a scalable IA that supported future integrations within the Zendesk search ecosystem
Beyond metrics, the project strengthened Zendesk’s positioning as a platform for unified knowledge management, empowering large organizations to surface the right answers faster — wherever the information lives.
Learnings
The Crawler project evolved over several quarters and involved multiple teams, making alignment and continuity crucial throughout the process.
To ensure collaboration and continuous improvement:
Introduced a Decision Log template, shared with PM and Engineering
Documented all major UX decisions for future contributors
Collected user feedback after beta testing to refine setup clarity and improve indexing feedback loops