Frances @ Waxell
We ran a 69-article help center rewrite using one Connect table — no external tool, no status meetings, no copy-paste. Here's the structure that made it work.

I ran a full rewrite of our help center this year — 69 articles, six collections, a multi-step review process — without opening a single project management tool. I used a table in Waxell Connect. That was it.
I'm still a little surprised this worked.
A Connect table is a structured data object in a workspace: rows and columns, just like a spreadsheet, except agents can read from it, write to it, and update rows as they complete work. For a bounded project with a defined set of items moving through stages, a table can be the entire coordination layer — no external software required.
Here's how the rewrite ran.
Before this, I had a problem I kept solving badly
The old pattern: I kept project status in a task tracker, and whenever I needed AI help with any of it, I'd manually copy the relevant rows into an AI chat session. The agent only knew what I pasted. Nothing about the broader project existed in the session — which articles were blocked, which had priority flags, what the collection structure looked like. I was handing the agent a fragment.
The problem isn't that copy-paste is slow (it is). It's that the agent is always working from an incomplete picture, so what it produces fits the pasted rows but not the actual project.
With the old workflow, every Cowork session started with a copy-paste, and every update had to be typed back. The agent did good work, then I closed the tab, and the next morning I started over: paste, brief, work, update, repeat.
The table structure
I created a table in Connect called help-center-rewrite with six columns:
Article (text) — the article title
Collection (select) — one of six: Getting Started, Campaigns, Account & Billing, Integrations, Troubleshooting, Advanced
Status (select) — todo, in-progress, needs-review, done
Priority (select) — high, normal, hold
Assignee (select) — agent or frances
Notes (text) — blockers, edge cases, revision direction
69 rows. Six columns. That's the whole project.
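If it helps to see the shape concretely, here's a minimal sketch of one row as a typed Python object. The column names and select options come from the list above; the class, the example article title, and the values are illustrative, not the Connect API.

```python
from dataclasses import dataclass
from typing import Literal

# Allowed values for the three select columns, as listed above.
Status = Literal["todo", "in-progress", "needs-review", "done"]
Priority = Literal["high", "normal", "hold"]
Assignee = Literal["agent", "frances"]

@dataclass
class Row:
    article: str                     # the article title
    collection: str                  # one of the six collections
    status: Status = "todo"
    priority: Priority = "normal"
    assignee: Assignee = "agent"
    notes: str = ""                  # blockers, edge cases, revision direction

# A hypothetical row; every new article starts as todo.
row = Row(article="How to connect your billing account",
          collection="Account & Billing", priority="high")
```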
How it ran
The rewrite happened in batches. Each session, I opened Cowork and specified the help-center workspace — Cowork enters it and reads the workspace files automatically before I type a word. The agent could see the full table: which articles were still todo, what collection they belonged to, what priority they carried. It would pick a batch, draft the revisions, and when it finished each article it updated the row — in-progress → needs-review — adding a note in the Notes column if anything needed my attention.
I'd review the drafts and either give direction or approve them.
When I approved an article, the agent used the Intercom Connector to push the updated content directly to our help center. No logging into Intercom, no finding the article manually, no copy-pasting the revised text into the editor. The connector handled it — the article was updated in Intercom, and the row moved to done. Not drafted-and-done. Actually published.
That last step is what closed the loop. Before the connector was in the workflow, approval still meant I had to go into Intercom and apply the change myself. With it, my job at that stage is: read the draft, decide yes or no. If yes, the agent publishes it and updates the table. If no, I leave a note in the Notes column and it goes back to in-progress.
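The approve/revise fork above is simple enough to write down. This is a sketch under stated assumptions: rows are plain dicts mirroring the table columns, and `publish_to_intercom` is a hypothetical stand-in for the Intercom Connector call, which is not something you implement yourself.

```python
# Placeholder for the connector: in the real workflow the agent pushes
# the approved draft to Intercom through the connector, not via code I wrote.
def publish_to_intercom(row):
    print(f"published: {row['article']}")

def review(row, approved, feedback=""):
    if approved:
        publish_to_intercom(row)       # article goes live first
        row["status"] = "done"         # done means published, not just drafted
    else:
        row["notes"] = feedback        # direction for the next pass
        row["status"] = "in-progress"  # back to the agent
    return row
```

The ordering matters: the row only moves to done after the publish step, which is what keeps "done" honest.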
No status meetings. No re-explaining what was still left. No "wait, did we finish the billing section?" The table knew. Every session started from the current state — not from whatever I managed to paste together the day before.
This is how it works in my setup, using Cowork as my interface for Connect. Connect is also accessible via API and web UI — if you're accessing Connect programmatically or through your own agent tooling, the table and connector work the same way.
What actually changed
The thing I didn't expect: the table made the project legible in a way my old task tracker never did, even though the tracker had more features.
Watching a column of 69 rows tick from todo to needs-review to done — you're looking at the project, not a dashboard's interpretation of it. I could see at a glance which collection was stalled, which articles were held for external input, where the agent was ahead of me and where it was waiting.
And with the Intercom Connector in the flow, done actually meant done. There was no backlog of approved-but-not-yet-published articles sitting in a doc somewhere. Approval was the last decision I made. Everything after that was the agent's job.
Knowledge workers spend roughly 60% of their time on what Asana calls "work about work" — chasing status updates, switching between tools, manually reconciling what's current. The table didn't solve that through clever architecture. It solved it by being the one place where project state lived, readable by both me and the agents doing the work. No reconciliation gap. No version where the tracker says one thing and the agent's working copy says another.
A small addition mid-project
Halfway through, I added an initiative column — a text field to tag each article with a broader goal: "SEO refresh," "support ticket reduction," "onboarding clarity." A single column, but it changed how the agent prioritized. An article tagged "support ticket reduction" got handled differently than one tagged "SEO refresh." Not because I wrote different instructions for each — the context was already in the table.
This is the kind of thing that happens when project context lives in a place agents can actually read. You stop writing long briefs. You add a column.
What you could build with this pattern
The help center rewrite was a bounded project: finite set of items, defined stages, clear done state. That description fits most projects. A bug tracker. A content calendar. An affiliate outreach list. A set of interview candidates. Anything where you're tracking a fixed number of things through a multi-step process.
The setup is the same regardless: define your columns (item, status, priority, notes at minimum), add your rows, connect it to a workspace where agents can read it. If the destination is a connected platform — a help center, a CMS, a CRM — you can close the loop completely. The agent drafts, you approve, the connector publishes. Your job is the decision, not the execution.
You're not setting up a project management tool. You're setting up a project. That's a shorter job.
FAQ
What is a Connect table and how is it different from a spreadsheet?
A Connect table is a structured data object in a workspace — rows and columns like a spreadsheet, but built into the agent-readable workspace layer rather than a file you open separately. An AI agent can query it, filter by column, read individual rows, and update fields as it completes work. A spreadsheet is something a human reads and manually updates. The operational difference matters when you're running agents that need to know project state without anyone pasting it into a prompt.
Do I need a separate project management tool alongside Connect to run a project this way?
For bounded projects with a defined scope and clear stages, a Connect table can be the entire coordination layer. The limitation is visibility features — if you need Gantt charts, resource capacity planning, or cross-project portfolio views, Connect tables won't replicate a full project management platform. But for a self-contained project running on an autonomous workflow, the table is usually enough.
How does an agent know what to work on next without a human assigning tasks?
The agent reads the table and applies filtering logic from the workspace playbook. In the help center setup, the rule was simple: pick the next three todo articles at high priority, work through them, update the rows when done. The agent didn't need explicit assignment — it needed table access and a clear selection rule.
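The selection rule is just a filter and a slice. A minimal sketch, assuming rows are dicts mirroring the table columns (the function name and data shape are illustrative, not the Connect API):

```python
# Pick the next few high-priority todo articles, as the playbook rule says.
def next_batch(rows, batch_size=3):
    candidates = [r for r in rows
                  if r["status"] == "todo" and r["priority"] == "high"]
    return candidates[:batch_size]
```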
What is the Intercom Connector and what does it do in this workflow?
The Intercom Connector is a Connect integration that gives agents direct access to Intercom's API — in this case, the ability to update help center articles. In the rewrite workflow, once I approved a draft, the agent used the connector to push the changes directly to the correct Intercom article. The row in the table moved to done only after the article was actually live. Without the connector, approval would have still required me to open Intercom, find the article, and apply the changes manually. The connector removed that step entirely.
What happens when someone else needs to check project status?
They read the table. Anyone with workspace access sees the current state without needing a status report, a meeting, or a message asking what's left. This isn't a sophisticated feature — it's just what happens when status lives in one place and stays current.
Can Connect tables handle projects larger than 69 items?
The table structure doesn't impose a meaningful limit. The more relevant constraint is cognitive: very large projects benefit from subdivision, either into multiple tables (one per phase or collection) or into sub-workspaces. For the help center rewrite, 69 rows in a single table was the right granularity. For something with 400 items, I'd probably split by phase.
Is this workflow only available through Cowork?
No. My workflow runs through Cowork as my interface for Connect, which is why I describe it in terms of opening sessions and entering workspaces. Connect tables are also accessible via the Connect API and web UI — you can read, filter, and update rows programmatically or through your own agent tooling. The table structure and behavior are the same regardless of how you access it.
Sources
Asana. "Context Switching Is Killing Your Productivity." https://asana.com/resources/context-switching
Breeze. "Project Management Statistics You Need to Know (2026)." https://www.breeze.pm/blog/project-management-statistics




