The assessment phase is where we understand your business, your processes, your constraints, and your goals. It's where we figure out whether AI makes sense for you, which problems AI should solve first, and what approach will work in your organisational context. By the end of the assessment, we provide recommendations and a roadmap. But many organisations don't know what happens next. They have a recommendations document. Now what? How does a recommendation become a working system? How long does it take? Who does what work? What should they expect?

Transparency about implementation is one of the ways we build trust. We're going to tell you exactly what happens after the assessment, what each phase involves, what you need to prepare, what we handle, and how we know when we're done.

Phase One: Detailed Planning (2 to 4 weeks)

After you've reviewed the assessment recommendations and decided to move forward, we don't dive immediately into implementation. We do detailed planning first. This sounds like bureaucracy, but it's actually how we prevent implementation from going off the rails. Detailed planning means we create a specific implementation plan with milestones, deliverables, resource requirements, timelines, risk mitigation, and success criteria.

During detailed planning, we create a project charter that outlines what we're building, what success looks like, what constraints we're operating under, what resources we need from your team, and how long we estimate each phase will take. We identify which systems we need to connect to, what data we need to access, and what access permissions we need to request. We create communication plans so everyone knows what's happening and who to contact with questions.

We also identify risks and mitigation strategies. What could go wrong? What's the plan if it does? If a key person on your team becomes unavailable, what happens? If connecting to a system takes longer than expected, do we have a backup plan? If the data is messier than we anticipated, how do we handle that? Identifying these risks upfront means we can prepare for them rather than discovering them mid-implementation and scrambling.

Your main contribution during detailed planning is reviewing and approving the plan, checking that our assumptions are correct, and committing resources. We need to know who's going to be involved from your team, how much time they can dedicate, who's going to be our point of contact, and who has authority to make decisions if we encounter issues.

Phase Two: Data Assessment and Preparation (3 to 8 weeks)

Most AI implementations depend heavily on data. Before we can build or configure the system, we need to understand your data, assess its quality, identify issues, and prepare it for the system. This is often where implementations run into surprises. The data is messier than people expected. Fields have inconsistent values. Historical data doesn't match current practices. Some records are missing critical information.

During this phase, we extract and audit data from your systems. We create reports showing data quality issues. We work with your team to understand which issues are actual problems that need fixing and which are acceptable quirks. We develop data cleaning scripts to fix fixable problems. We create data validation rules that the new system will use to catch problems going forward. We prepare data migration plans that move data from your current systems into the new one in a way that maintains data integrity and quality.
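To make the validation piece concrete, here is a minimal sketch of the kind of declarative rule set a new system might apply at data intake. It's illustrative only; the field names, formats, and rules are hypothetical, not drawn from any particular engagement.

```python
# A sketch of declarative validation rules of the kind a new system applies
# at data intake. Field names, formats, and rules are hypothetical.

import re

VALIDATION_RULES = {
    "customer_id": lambda v: bool(re.fullmatch(r"C\d{6}", v or "")),
    "email":       lambda v: v is not None and "@" in v,
    "order_total": lambda v: v is not None and float(v) >= 0,
}

def validate_record(record: dict) -> list[str]:
    """Return the names of fields that fail their validation rule."""
    return [field for field, rule in VALIDATION_RULES.items()
            if not rule(record.get(field))]

# One clean record, and one with a malformed ID, a missing email, and a
# negative total: the second should fail all three rules.
print(validate_record({"customer_id": "C123456", "email": "a@b.com",
                       "order_total": "19.99"}))                  # []
print(validate_record({"customer_id": "123", "order_total": "-5"}))
# ['customer_id', 'email', 'order_total']
```

Rules like these do double duty: during preparation they drive the data quality reports, and after go-live the system applies them to catch problems at the point of entry.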

This is often the most tedious phase because data quality work doesn't look like progress. But it's the foundation for everything that comes after. A system built on clean, well-structured data will work well. A system built on messy data will constantly produce wrong results that need manual fixing.

Your involvement is providing context about the data. Why does this field sometimes have values that don't match the expected pattern? Is this a data entry error or is it valid? How should we handle missing values? Which data quality issues are acceptable and which must be fixed before implementation? Working through these questions with your team ensures the data is prepared correctly.

Phase Three: System Configuration and Testing (4 to 12 weeks)

This is where the system gets built or configured. If we're implementing an off-the-shelf tool, we configure it to match your workflows; if we're building custom components, we develop them to the specification agreed during planning. We set up the data model so fields map correctly to your data. We configure rules about how data flows through the system. We set up integrations with your other systems. We build workflows that route work to the right people at the right time.

Throughout this phase, we run multiple rounds of testing. Unit testing checks individual functions. Integration testing checks whether the system connects correctly to your other systems. User acceptance testing (UAT) involves your team using the system with real or realistic data and checking whether it works as expected. We identify bugs and fix them. We identify gaps in functionality and add features. We run performance testing to make sure the system can handle your data volume.
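For a flavour of what unit testing looks like at this stage, here is a sketch that checks a single business rule in isolation. The rule and the function name are hypothetical examples; a real test suite covers hundreds of rules like this.

```python
# A sketch of a unit test for one business rule: order totals must be
# non-negative numbers. The rule and function name are hypothetical.

def is_valid_order_total(value) -> bool:
    """Business rule: an order total must parse as a non-negative number."""
    try:
        return float(value) >= 0
    except (TypeError, ValueError):
        return False

def test_order_total_rule():
    assert is_valid_order_total("19.99")      # normal value passes
    assert is_valid_order_total(0)            # zero is allowed
    assert not is_valid_order_total("-5.00")  # negatives are rejected
    assert not is_valid_order_total(None)     # missing values fail safely
    assert not is_valid_order_total("N/A")    # legacy placeholders fail

test_order_total_rule()
print("order total rule: all checks passed")
```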

This phase produces a system that works technically: it handles the data correctly, integrates with your systems, and enforces your business rules. But your team hasn't used it yet, and we haven't trained them on how to use it. That's the next phase.

Phase Four: Training and Change Preparation (2 to 4 weeks)

Once the system is tested and working, your team needs to learn how to use it. We create training materials that explain what the system does, how to use it, what workflows changed, and what the new work process looks like. We run training sessions where your team uses the system with our guidance. We create reference materials they can use after implementation when they have questions. We answer questions and clear up confusion.

This phase also involves change management. Using a new system means doing work differently. That's disruptive, even if the new way is better. We help your leadership communicate why the change is happening, what benefits it will bring, and how it will affect day-to-day work. We identify pockets of resistance and work with leadership to address them. We create feedback channels so your team can raise problems and concerns.

By the end of this phase, your team understands how the system works, is relatively comfortable using it, and knows where to get help if they have questions. They've used the system in a training environment and know what to expect when it goes live.

Phase Five: Go-Live and Stabilisation (2 to 6 weeks)

Go-live is when the system switches from testing to production. Your team starts using it for real work. This is often bumpy. People discover edge cases we didn't anticipate during testing. They find workarounds the system doesn't support. They accidentally discover bugs we missed. We're actively monitoring during go-live, responding quickly to problems, and making adjustments to keep things running.

Go-live can be a hard cutover (the new system is on and the old system is off immediately) or a soft cutover (run both systems in parallel for a period while you verify the new system is working correctly, then turn off the old system). Most implementations use some form of soft cutover for critical processes because it's lower risk. If something goes wrong with the new system, you can fall back to the old one while we fix the problem.
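During a soft cutover, the parallel-run verification can be as simple as reconciling the two systems' outputs record by record. A minimal sketch, assuming both systems can export results keyed by transaction ID (all names, amounts, and the tolerance are hypothetical):

```python
# A parallel-run reconciliation sketch for a soft cutover: compare outputs
# from the old and new systems keyed by transaction ID and list every
# discrepancy. IDs, amounts, and the tolerance are hypothetical.

def reconcile(old: dict[str, float], new: dict[str, float],
              tolerance: float = 0.01) -> list[str]:
    """Return human-readable discrepancies between the two runs."""
    issues = []
    for txn_id in sorted(old.keys() | new.keys()):
        if txn_id not in new:
            issues.append(f"{txn_id}: missing from new system")
        elif txn_id not in old:
            issues.append(f"{txn_id}: appears only in new system")
        elif abs(old[txn_id] - new[txn_id]) > tolerance:
            issues.append(f"{txn_id}: old={old[txn_id]} new={new[txn_id]}")
    return issues

old_run = {"T001": 100.00, "T002": 250.50, "T003": 75.25}
new_run = {"T001": 100.00, "T002": 250.40, "T004": 10.00}

for issue in reconcile(old_run, new_run):
    print(issue)
# T002: old=250.5 new=250.4
# T003: missing from new system
# T004: appears only in new system
```

A clean reconciliation run over a few weeks of parallel operation is usually the evidence everyone needs to switch off the old system with confidence.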

During stabilisation, we monitor the system's performance, response times, error rates, and data quality. We track metrics showing whether the system is achieving what we expected. We continue fixing bugs and making adjustments. We help your team with problems and answer questions. By the end of stabilisation, the system is running reliably, your team is comfortable using it, and we're confident it's ready for normal operation.

Phase Six: Handover and Ongoing Support

Eventually, we transition from implementation to ongoing support. The system is stable, your team knows how to use it, and we've documented how it works. Handover involves creating final documentation, knowledge transfer sessions where we explain how to maintain the system, and establishing what ongoing support looks like. We identify which tasks your internal team can handle and which might need external support.

Most organisations need ongoing support in a few areas. First, monitoring and maintenance. The system needs to be checked regularly. Are there errors or anomalies? Are performance metrics still acceptable? Is the data quality still good? Someone on your team or an external team needs to monitor these metrics. Second, user support. New staff come on board and need training. Your team discovers new use cases and asks whether the system supports them. Someone needs to help troubleshoot. Third, evolution. Your business changes, and the system needs to change with it. A new field is needed. A workflow changes. A new integration is required. Someone needs to implement these changes.
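In practice, the monitoring piece often starts as a scheduled script that compares current metrics against agreed thresholds and flags anything out of bounds. A minimal sketch, with hypothetical metric names and limits:

```python
# A sketch of a scheduled health check used during ongoing maintenance:
# compare current operational metrics against agreed thresholds and flag
# anything out of bounds. Metric names and limits are hypothetical.

THRESHOLDS = {
    "error_rate_pct":       2.0,   # alert if more than 2% of jobs fail
    "avg_response_ms":      1500,  # alert if responses slow past 1.5 s
    "records_rejected_pct": 5.0,   # alert if data quality slips
}

def check_health(current: dict[str, float]) -> list[str]:
    """Return an alert message for every metric above its threshold."""
    return [f"ALERT: {name} = {current[name]} (threshold {limit})"
            for name, limit in THRESHOLDS.items()
            if current.get(name, 0) > limit]

# Example reading, as might be pulled from the system's metrics endpoint.
today = {"error_rate_pct": 3.4, "avg_response_ms": 820,
         "records_rejected_pct": 1.1}
for alert in check_health(today):
    print(alert)  # flags only the elevated error rate
```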

We typically offer support packages ranging from basic (email support, monthly check-ins) to comprehensive (onsite support, ongoing optimisation, regular strategy sessions). You choose what level of support makes sense for your situation. As your team builds internal expertise, you might reduce the level of external support because you can handle more yourself.

Timeline and Resource Requirements

The whole process from detailed planning through handover typically takes 4 to 6 months for a moderately complex implementation. Simple implementations might be done in 2 to 3 months. Complex implementations with multiple systems, significant data preparation, and substantial custom work might take 8 to 12 months or longer.

Your team's involvement varies by phase. During detailed planning, you need one or two senior people part-time. During data assessment, you need people who understand the data and the systems. During configuration, you need people who can explain your workflows and review the system design. During training, you need time for your team to learn. During go-live, you need people available to use the system, identify problems, and help us troubleshoot.

Our involvement also varies. We're heavily involved during data assessment, system configuration, and testing. We're moderately involved during training and go-live. After handover, our involvement drops significantly unless you've contracted for ongoing support.

Success Criteria and Handoff Conditions

We always define upfront what success looks like. Not vague statements like "the system works well" but specific metrics: for example, the system processes 95% of work automatically without manual intervention, the time to complete a transaction is reduced by 75%, error rates are below 2%, and customer satisfaction scores improve by 10%. These metrics tell us whether the implementation actually achieved its goals.
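Expressed as data rather than prose, a success-criteria check might look like the sketch below. The targets mirror the example figures above; the measured values are invented for illustration.

```python
# The example success criteria expressed as a machine-checkable list. The
# targets mirror the figures in the text; the measured values are invented.

CRITERIA = [
    # (metric, measured, target, higher_is_better)
    ("auto_processing_rate_pct",       96.2, 95.0, True),
    ("transaction_time_reduction_pct", 78.0, 75.0, True),
    ("error_rate_pct",                  1.4,  2.0, False),
    ("csat_improvement_pct",           11.0, 10.0, True),
]

def criteria_met(criteria) -> bool:
    """Print a pass/fail line per metric and return True only if all pass."""
    all_met = True
    for metric, measured, target, higher_is_better in criteria:
        met = measured >= target if higher_is_better else measured <= target
        print(f"{metric}: measured {measured}, target {target} -> "
              f"{'met' if met else 'NOT met'}")
        all_met = all_met and met
    return all_met

print("ready for handover:", criteria_met(CRITERIA))
```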

We hand off to your team only when success criteria have been met, your team is comfortable operating the system, documentation is complete, ongoing support processes are established, and everyone understands what comes next. A premature handoff creates problems. A delayed handoff wastes money. We aim to get the timing exactly right.

Frequently Asked Questions

How much time does our team need to dedicate?

It varies by phase and depends on complexity. Typically, you need 2 to 5 full-time equivalent people during implementation, with intensity varying by phase. Data assessment and configuration require the most involvement. Training and go-live require active participation. Detailed planning and ongoing support require less. We always clarify upfront exactly how much time we need from your team and what roles are involved. You need to plan for this availability or implementation will slow down.

What if we discover during implementation that we need to change direction?

This happens. During detailed planning, we build flexibility into the plan for minor adjustments. If you discover that your original goal isn't quite right, we adjust. If the system isn't working the way you expected and you want to change how it works, we can do that. The key is that changes need to happen early and you need to understand the cost impact. Changes discovered in week six cost less to implement than changes discovered in week sixteen. We build change management into our process so you can adjust when you need to, knowing the impact before you commit.

What happens if something goes wrong during go-live?

We have contingency plans. If the new system has critical problems, we can fall back to the old system while we fix the problems. We don't force you to use a broken system. We stay engaged and work intensively to fix problems quickly. Most implementations have some bumps during go-live, but rarely anything catastrophic. Our experience and testing phases are designed to identify and fix problems before they become critical production issues.