From News Scraper to AI Dashboard in 90 Days
The story of how a news aggregation service pivoted to become a multi-model AI productivity platform.

February 3, 2023. The first commit to what would become our AI dashboard was a news fetcher. RSS feeds in, AI summaries out, archives stored in Google Cloud Storage.
We called the repository “ai-news-api.” Within 90 days, the news features were deleted, the project was renamed “dashboard-api,” and we were building something entirely different. That pivot – from content pipeline to AI productivity platform – was the fastest strategic shift in our company’s history.
Week one: a news aggregation service
The original idea was straightforward. AI was getting better at summarization. News was abundant but overwhelming. Build a service that fetches feeds, processes them through OpenAI, and delivers curated digests.
The first week of commits tells the story of a focused product: MongoDB connection setup, RSS feed fetching, news processing endpoints, GitHub Actions for CI/CD, Nginx configuration for deployment. We had a working news aggregation pipeline within days.
Google Cloud Storage was added for archiving – zip processed articles and push them to a bucket. The thinking was that news archives would be valuable for enterprise clients who needed historical AI-curated data.
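The archive step itself was small. A minimal sketch of the idea in TypeScript, assuming the standard @google-cloud/storage and archiver packages – the article shape and names here are hypothetical, not our production schema:

```typescript
import { Storage } from "@google-cloud/storage";
import archiver from "archiver";

// Hypothetical shape of a processed article; the real records lived in MongoDB.
interface ProcessedArticle {
  slug: string;
  summary: string;
}

const storage = new Storage(); // credentials come from the environment

// Zip a batch of processed articles and stream the archive into a GCS bucket.
async function archiveArticles(
  articles: ProcessedArticle[],
  bucketName: string,
  objectName: string
): Promise<void> {
  const upload = storage
    .bucket(bucketName)
    .file(objectName)
    .createWriteStream({ metadata: { contentType: "application/zip" } });

  const zip = archiver("zip", { zlib: { level: 9 } });

  // Resolve when GCS acknowledges the upload, reject on either stream failing.
  const done = new Promise<void>((resolve, reject) => {
    upload.on("finish", () => resolve());
    upload.on("error", reject);
    zip.on("error", reject);
  });

  zip.pipe(upload);
  for (const article of articles) {
    zip.append(article.summary, { name: `${article.slug}.txt` });
  }
  await zip.finalize(); // flush all entries into the stream
  await done;
}
```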
It was a clean product concept. It was also wrong.
March: the infrastructure was better than the product
By March, something interesting was happening. The infrastructure we’d built to support news processing was more capable than the news product warranted.
We’d added a task queue manager with RPM (requests-per-minute) rate limiting to handle OpenAI’s API constraints. A document composition system with templates for formatting output. Stripe billing with subscription plans. These were serious backend components for a news digest service.
The task queue, in particular, was overbuilt for its original purpose. It could manage concurrent API calls across multiple providers, retry on failure, respect rate limits per provider, and report progress. We’d built a general-purpose AI orchestration layer and were using it to summarize blog posts.
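To make that concrete: the pattern fits in one small class. A simplified TypeScript sketch, assuming a sliding 60-second window per provider and naive retry – class names, limits, and the retry policy are illustrative, not our production code:

```typescript
type Task<T> = () => Promise<T>;

interface ProviderState {
  rpmLimit: number;         // max requests per minute for this provider
  timestamps: number[];     // start times of recent requests
}

class AiTaskQueue {
  private providers = new Map<string, ProviderState>();

  register(provider: string, rpmLimit: number): void {
    this.providers.set(provider, { rpmLimit, timestamps: [] });
  }

  // Run a task under the provider's RPM budget, retrying on failure.
  async run<T>(provider: string, task: Task<T>, retries = 3): Promise<T> {
    const state = this.providers.get(provider);
    if (!state) throw new Error(`unknown provider: ${provider}`);

    await this.acquireSlot(state);
    try {
      return await task();
    } catch (err) {
      if (retries > 0) return this.run(provider, task, retries - 1);
      throw err;
    }
  }

  // Wait until the provider has made fewer than rpmLimit calls in the last 60s.
  private async acquireSlot(state: ProviderState): Promise<void> {
    for (;;) {
      const cutoff = Date.now() - 60_000;
      state.timestamps = state.timestamps.filter((t) => t > cutoff);
      if (state.timestamps.length < state.rpmLimit) {
        state.timestamps.push(Date.now());
        return;
      }
      // Sleep until the oldest call ages out of the window.
      await new Promise((r) => setTimeout(r, state.timestamps[0] - cutoff));
    }
  }
}
```

Calling it looks like `queue.run("openai", () => summarize(article))` – the same interface whether the task is a news summary or, later, a chat completion.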
That mismatch between infrastructure capability and product scope was the first signal that we were building the wrong thing.
April: the chat experiment
In April, we added a chat feature. GPT-3.5 had been available since March, and the integration was straightforward – we already had OpenAI in the stack for news summarization.
The chat feature was supposed to be secondary. A way for users to ask questions about their news feeds. Instead, it immediately became the most-used part of the product.
The technical challenge was context management. GPT-3.5 had a 4,096 token limit. Our chat implementation needed context trimming – a system that tracked conversation history, estimated token usage, and trimmed older messages to stay within limits. We built a sliding window approach that preserved the system prompt and the most recent exchanges while dropping middle messages.
That context trimming system, born from a 4K token constraint, evolved into the conversation management architecture that still powers our platform today. Constraints have a way of producing lasting solutions.
The same month, we added email authentication. The AuthEmailComponent was our first step toward multi-user support. A news service doesn’t need user accounts. A productivity platform does.
May: the pivot becomes undeniable
May 2023 was the month the news product officially died.
We built “Bulk Chat” – what we’d later recognize as research agents. Users could give the AI a research task, and it would plan subtasks, execute Google searches, scrape web pages, synthesize findings, and return structured results with citations. This was autonomous agent behavior, though we didn’t use that language at the time.
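The loop itself was simple; the value was in the orchestration underneath. A sketch of the shape it took, where every tool name is hypothetical:

```typescript
interface SearchResult { url: string; title: string; }
interface Finding { url: string; excerpt: string; }

// Hypothetical capabilities the loop is wired to; the real versions
// ran behind the rate-limited task queue.
interface AgentTools {
  planSubtasks(task: string): Promise<string[]>;                   // LLM call
  search(query: string): Promise<SearchResult[]>;                  // Google search
  scrape(url: string): Promise<string>;                            // page text
  synthesize(task: string, findings: Finding[]): Promise<string>;  // LLM call
}

// Plan -> search -> scrape -> synthesize, returning an answer with citations.
async function runResearchTask(task: string, tools: AgentTools): Promise<string> {
  const subtasks = await tools.planSubtasks(task);
  const findings: Finding[] = [];

  for (const subtask of subtasks) {
    const results = await tools.search(subtask);
    for (const result of results.slice(0, 3)) { // top hits per subtask
      const text = await tools.scrape(result.url);
      findings.push({ url: result.url, excerpt: text.slice(0, 2000) });
    }
  }

  // The synthesis prompt includes the URLs so the model can cite them.
  return tools.synthesize(task, findings);
}
```

Everything interesting hides behind the tool interface; the loop itself is deliberately dumb.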
The news fetcher was removed. The RSS generator was removed. Features that had shipped in February and March were deleted from the codebase in May. Three months from launch to kill.
The project was renamed from “ai-news-api” to “dashboard-api.” That rename was more than cosmetic. It was a public admission that the product had changed. Renaming a repository breaks links, invalidates deployment configs, and forces everyone on the team to update their mental model. We did it anyway because leaving the old name would have been dishonest about what we were building.
The framework detour
Not every decision during the pivot was correct. We briefly adopted the Bishop framework for our API layer, thinking a more structured approach would help as the product grew. It didn’t fit. The abstraction layer added complexity without proportional benefit for our use case.
We replaced Bishop with raw Koa.js plus OpenAPI. Less opinionated, more flexible, and better aligned with the rapid iteration speed we needed during the pivot. The Bishop experiment lasted about a month. The Koa architecture lasted two years.
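“Raw Koa plus OpenAPI” in practice means something like the sketch below, assuming the standard koa and @koa/router packages – the routes and spec contents are illustrative:

```typescript
import Koa from "koa";
import Router from "@koa/router";

// A hand-maintained OpenAPI document, served alongside the routes
// so clients and tooling can discover the API.
const openApiSpec = {
  openapi: "3.0.0",
  info: { title: "dashboard-api", version: "1.0.0" },
  paths: {
    "/health": { get: { responses: { "200": { description: "OK" } } } },
  },
};

const app = new Koa();
const router = new Router();

router.get("/health", (ctx) => {
  ctx.body = { status: "ok" };
});

router.get("/openapi.json", (ctx) => {
  ctx.body = openApiSpec;
});

app.use(router.routes()).use(router.allowedMethods());
app.listen(3000);
```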
The lesson: when you’re pivoting, minimize the number of things you’re changing simultaneously. We were already changing the product, the user model, and the billing structure. Adding a new API framework on top of that was one variable too many.
The 34 contributors
Dashboard v1 eventually accumulated 2,892 commits from 34 contributors over roughly three years. Marina Gonokhina was the dominant contributor with 1,346 commits – nearly half the total.
That contributor distribution tells you something about the project’s character. This wasn’t a broad open-source effort with hundreds of casual contributors. It was a focused team where a handful of engineers owned the majority of the codebase. The count of 34 includes designers, DevOps engineers, QA testers, and part-time contributors who tackled specific features.
The concentrated ownership had advantages. Architectural consistency. Fast decision-making. Deep institutional knowledge of every system. It also had risks – bus factor being the obvious one. We mitigated that with documentation, code reviews, and the kind of aggressive commenting that comes from a team that knows it’s building something complex.
The TVBS signal
One early validation came from an unexpected source: TVBS, a Taiwanese broadcaster. They found the dashboard during its news-processing phase and stayed through the pivot.
We built a pre-approved domains system for them – a way to restrict which origins could access the API. It was an enterprise requirement that pushed us toward proper multi-tenant architecture earlier than we’d planned.
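The mechanism is a straightforward allowlist check. A sketch of it as Koa middleware, where the lookup function and header names are assumptions about the shape, not the production code:

```typescript
import type { Context, Next } from "koa";

// Hypothetical lookup: which origins an organization has pre-approved.
// In production this would come from the database; hardcoded for the sketch.
async function getApprovedOrigins(orgId: string): Promise<Set<string>> {
  return new Set(["https://example-client.com"]);
}

export async function enforceApprovedDomains(ctx: Context, next: Next) {
  const origin = ctx.get("Origin");
  const orgId = ctx.get("X-Org-Id"); // illustrative header

  if (!origin || !orgId) {
    ctx.throw(400, "missing Origin or organization header");
  }

  const approved = await getApprovedOrigins(orgId);
  if (!approved.has(origin)) {
    ctx.throw(403, `origin ${origin} is not approved for this organization`);
  }

  // Reflect only approved origins, never a wildcard.
  ctx.set("Access-Control-Allow-Origin", origin);
  await next();
}
```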
Enterprise clients do that. They show up with requirements you hadn’t considered, and meeting those requirements makes the product more robust for everyone. The organization-level access controls we built for TVBS became the foundation for our full multi-tenant system.
From 90 days to three years
The pivot took 90 days. The product that emerged took three years to mature.
After the May 2023 rename, Dashboard v1 grew rapidly. Document composition with 100+ templates. Knowledge bases with vector search. Custom chatbots that could be embedded on external sites. Image generation with Stable Diffusion and DALL-E. Claude integration alongside GPT. Organization management with role-based access controls. Pro Search with web scraping.
By the end of 2024, the dashboard supported 30+ AI models through direct integrations and OpenRouter, served organizations with multi-user access, and managed knowledge bases with RAG-powered document retrieval. It was, by any measure, a serious product.
None of which was planned in February 2023, when we were building a news scraper.
The courage to delete
The hardest part of the pivot wasn’t building new features. It was deleting working ones.
The news fetcher worked. The RSS generator worked. The Google Cloud Storage archive pipeline worked. Engineers had spent weeks building and testing these systems. Deleting them felt like waste.
It wasn’t waste. It was research. Building the news pipeline taught us that our infrastructure could handle concurrent AI API calls at scale. The task queue we built for news processing became the backbone of the chat system. The document composition templates we designed for news digests evolved into the 100+ templates in the productivity dashboard.
Every feature we killed contributed something to the product that replaced it. The code was deleted, but the knowledge wasn’t.
The lesson for any team facing a similar pivot: don’t evaluate killed features by their direct contribution. Evaluate them by what building those features taught the team. If the answer is “nothing,” you have a bigger problem than a failed feature. If the answer is “we learned how to build X, which enabled Y,” then the feature served its purpose – even if its purpose wasn’t the one you originally intended.
We went from news scraper to AI dashboard in 90 days. We went from AI dashboard to agent platform in another 30 months. The next pivot, whenever it comes, will be faster. That’s what building teaches you: not what to build, but how quickly you can change direction when the market tells you to.
Alexey Suvorov
CTO, AIWAYZ
10+ years in software engineering. CTO at Bewize and Fulldive. Master's in IT Security from ITMO University. Builds AI systems that run 100+ microservices with small teams.