Anthropic Academy: Artificial Intelligence Assistant

The next course on my Anthropic Academy learning list covered Artificial Intelligence assistance and how to integrate it into a coding workflow. It focused on the behind-the-scenes mechanics of an AI assistant – the underlying architecture, implementation techniques, context management, and extended functionality – and how they enable interaction with developers and support software development.

While the course covered the theory in detail, it also had a hands-on section. I loved being able to work through an example to see Claude in action. This blog will cover the concepts of an AI assistant and summarize what I learned through the hands-on example.

If Claude were a person…

Artificial intelligence is used in a conversational manner – the user gives a prompt (usually a question or a command), and the language model generates a response based on patterns learned from its training data. That sounds simple, but the process underneath is far more complex.

This course highlighted that language models are powerful readers but not strong doers. It’s like having a brilliant consultant that you can only reach through snail mail, who has no access to your office. You can send them a letter with instructions, and they can receive it, read it, and send a letter back. However, when you ask them to perform tasks, like creating a report based on data in your file cabinet, they have no way to do so.

Hence, the need for AI assistants like Claude. An AI assistant is the agent framework that surrounds the LLM and aids it by gathering context, formulating a plan, and taking action to do the requested task. So, in our consultant scenario, the AI assistant would deliver the letter, read the consultant's response (probably with a list of requirements for solving the problem), do the legwork (e.g., find the files in the cabinet), and send information and updates back to the consultant. This process is repeated until the consultant can give a final, complete answer. By handling tasks (doing the legwork) through tool use, the simple text-generating model becomes a powerful assistant.
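The gather-context, plan, act loop described above can be sketched in a few lines of Python. This is a toy illustration, not Claude's actual implementation: the "model" is faked so the control flow is visible, and the tool, file path, and message format are all invented for the example.

```python
def read_file(path: str) -> str:
    """A tool the assistant can run on the model's behalf (the 'legwork')."""
    # Hypothetical file cabinet, standing in for real file access.
    return {"cabinet/report.txt": "Q3 sales were up 12%."}.get(path, "<not found>")

TOOLS = {"read_file": read_file}

def fake_model(messages: list[dict]) -> dict:
    """Stand-in for the LLM: first requests a tool, then gives a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        # The 'consultant' replies with a list of requirements: fetch this file.
        return {"type": "tool_call", "tool": "read_file",
                "args": {"path": "cabinet/report.txt"}}
    tool_result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"type": "final", "content": f"Report summary: {tool_result}"}

def agent_loop(prompt: str, max_turns: int = 5) -> str:
    """Repeat model -> tool -> model until the model gives a complete answer."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = fake_model(messages)
        if reply["type"] == "final":                 # complete answer: stop
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply["args"])  # do the legwork
        messages.append({"role": "tool", "content": result})
    return "Gave up after too many turns."
```

The key design point is that the model only ever reads and writes text; the surrounding loop is what actually touches the file cabinet.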

You’ve probably interacted with a chatbot before and run into issues providing the right context to get a more accurate answer. With Claude Code embedded in your project, it has access to everything it needs for deep context awareness, allowing you to build an assistant that better understands your needs and more accurately responds to your prompts, resulting in a more efficient development process.

Claude Code in Action

Once Claude has analyzed your project, it can be prompted to create a context file called CLAUDE.md that summarizes the project's purpose and architecture, including important commands, critical files, and coding patterns and structure. Claude uses this file as a guide when analyzing user prompts within the project. This file is customizable (e.g., reference files, specific instructions, interacting with other agents, etc.), creating a solid repeatable foundation for responses. This is also where Agent Skills come in, which I cover in more detail in this blog.
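A CLAUDE.md file is plain Markdown, so you can shape it however you like. As a rough illustration (the project, commands, and conventions below are invented, not a prescribed format):

```markdown
# Project: invoice-tracker

## Purpose
Small Flask app that ingests CSV invoices and renders monthly summaries.

## Important commands
- `make test` – run the unit tests
- `make run` – start the dev server

## Conventions
- All database access goes through `app/db.py`; never query directly from views.
- Follow the existing Google-style docstring format.
```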

The customization doesn’t stop there. Even though Claude comes with built-in commands for giving it instructions, you can also create custom commands tailored to your project's context. This is very beneficial for automating frequently run, repetitive tasks, providing consistency with the right context and flexibility through arguments.
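Custom commands in Claude Code are Markdown files placed under `.claude/commands/`; the file name becomes the slash command, and `$ARGUMENTS` is replaced with whatever you type after it. A hypothetical example (the task details are my own):

```markdown
<!-- .claude/commands/fix-issue.md — invoked as: /fix-issue 123 -->
Find issue #$ARGUMENTS in our tracker, reproduce the bug locally,
write a failing test for it, then implement a fix and run `make test`.
```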

Ever lost your train of thought during a conversation with a friend? This can also happen with Claude during complex prompts. To keep Claude on the right track, you can stop a response mid-generation by hitting escape to redirect the conversation, capture repeated mistakes in memory to reference during future conversations, rewind the conversation to jump back to an earlier point, or manage the conversation context directly. Learning these conversation control techniques will make responses more efficient and accurate.

Another powerful use of Claude is implementing changes. Instead of having to manually go through every file to make sure your changes don’t break anything downstream, you can hand the reins to Claude instead. Here are some options to use when steering Claude during prompts to make changes:

1. Screenshots give Claude a clear visual reference for what you want to change.

2. Planning Mode makes Claude explore your project in depth before implementing changes.

3. Effort level controls Claude's reasoning process: lower effort is faster and uses fewer tokens, while higher effort takes longer to process. Prefixing a single prompt with ultrathink tells Claude to spend extra time reasoning through that one response without changing the effort level for the rest of the session.

But wait, there’s more

Claude Code’s capabilities don’t stop there. You can connect it to MCP servers to provide new tools and abilities not natively available with Claude. There are a variety of servers designed to address specific needs, such as database integration, API testing, file system operations, cloud service integrations, and more. You can also integrate it into GitHub Actions to handle pull requests – it will review the request, create task plans to address issues, and generate a detailed report.
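MCP servers can be registered per project so the whole team shares the same tools; one common approach is a `.mcp.json` file checked into the repo. The server name, package, and connection string below are illustrative, not a specific recommendation:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres",
               "postgresql://localhost/mydb"]
    }
  }
}
```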

Claude can also use hooks, but this topic will be covered in detail in another blog.

With so many customizable options, integrations, and capabilities, it’s no wonder Claude is such a powerful AI assistant. I used the small example project in this course to work through the steps, but I look forward to creating a project in the future to implement Claude Code and customize it for my needs.

Author:
Lorraine Ferrusi
Powered by The Information Lab
© 2026 The Information Lab