Author: Jacqueline Carvalho
Introduction: A new language for the AI era
Technology has always reshaped how we think, build, and communicate. From command lines to graphical interfaces, every leap forward demanded a new kind of literacy. The rise of generative AI is no different: it invites us to learn a new language, the language of prompts.
At Nubank, we see this not just as a technical skill, but as a fundamental shift in how we collaborate with technology. This is where prompt engineering comes in: the art and science of crafting instructions that make AI systems truly useful.
The clearer we express intent, context, and nuance, the more effective AI can amplify our work. This process mirrors everyday human interaction: we rephrase questions, add context, and iterate until we get a useful response. Optimizing prompts to elicit better responses simply applies the same conversational skill to machines.
What Is Prompt Engineering?
Prompt engineering is the discipline of designing, structuring, and refining inputs so that an AI system can produce accurate, relevant, and controllable outputs. In essence, it’s not about “making the AI smarter”; it’s about conveying what we actually mean. Think of an LLM (Large Language Model) as an extremely capable but literal-minded assistant. If you say “write a summary” it will, but of what length? For whom? In what tone? Without these clues, the model must guess, and that’s where inconsistency creeps in.
That’s the backbone of every good prompt. It’s the difference between saying “summarize this report” and saying “summarize this report in 5 bullet points focusing on financial risks, using concise business language.” The second one sets expectations, boundaries, and intent, and gets you far better results.
Prompt engineering, then, is not just a trick to hack the model; it’s the foundation of AI collaboration. You’re not commanding the model; you’re designing a dialogue.
Key principles behind effective prompting
1. Structure of a good prompt
Just like good code needs structure, good prompts need order. A simple mental model is a four-part structure: role (who the model should be), task (what it should do), format (how the output should look), and tone (how it should sound).
For example: “You are a policy analyst. Review the following report on digital payments in Latin America and write a one-page executive summary focusing on consumer trust and regulatory frameworks. Use clear, neutral language.”
This structure ensures precision and reproducibility, two things every developer values.
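The four-part structure can be sketched as a small helper function. This is a hypothetical illustration (the function name and signature are my own, not a specific library), but it shows how treating role, task, format, and tone as explicit parameters makes prompts reproducible:

```python
def build_prompt(role: str, task: str, fmt: str, tone: str) -> str:
    """Assemble a prompt from the four parts: role, task, format, tone."""
    return (
        f"You are {role}. "
        f"{task} "
        f"Format: {fmt}. "
        f"Use a {tone} tone."
    )

prompt = build_prompt(
    role="a policy analyst",
    task="Review the following report on digital payments in Latin America "
         "and write an executive summary focusing on consumer trust "
         "and regulatory frameworks.",
    fmt="one-page executive summary",
    tone="clear, neutral",
)
```

Because every part is an argument, two people on the same team produce structurally identical prompts, which is exactly the reproducibility the text describes.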
2. Role prompting: “Act as a…”
One of the simplest yet most powerful tools is role prompting, telling the model who it should be. This instantly changes the voice, expertise, and assumptions behind its response.
For instance: “You are a cybersecurity consultant advising a fintech startup. Explain the top three security risks in simple terms for a non-technical founder.”
The “Act as…” instruction gives the model a lens, much like setting up an environment variable for the conversation. It defines expertise, scope, and audience at once.
You can even stack roles for more nuanced reasoning: “Act as both a financial analyst and a UX researcher. Analyze this digital wallet flow for potential risks and user friction.”
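In chat-based systems, role prompting is often expressed as a system message that precedes the user’s request. A minimal sketch, using the generic message-list shape that several chat APIs accept (the helper name `with_role` is hypothetical):

```python
def with_role(role: str, user_message: str) -> list[dict]:
    """Pin the model's persona with a system message, then add the user's
    request. The system message acts like an environment variable for the
    whole conversation."""
    return [
        {"role": "system", "content": f"Act as {role}."},
        {"role": "user", "content": user_message},
    ]

messages = with_role(
    "a cybersecurity consultant advising a fintech startup",
    "Explain the top three security risks in simple terms "
    "for a non-technical founder.",
)
```

Stacking roles is then just a richer system message, e.g. "Act as both a financial analyst and a UX researcher."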
3. Anchoring with context
LLMs are context-dependent: they don’t know what you know unless you tell them. Anchoring is about feeding the right background so the model can ground its reasoning.
For example: “Based on the following company culture statement and mission, draft a welcome message for new hires that feels authentic to our tone.”
Or, in data-heavy scenarios: “Using the data below, generate insights on customer churn trends by region.”
Anchoring ensures relevance. Without it, the model’s output might sound generic or detached from reality. In enterprise environments, this technique becomes even more important when connecting prompts with internal documents or APIs, as seen in RAG (Retrieval-Augmented Generation) systems.
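One common way to anchor is to wrap the background data in clearly marked delimiters and instruct the model to rely on it alone. A minimal sketch (the delimiter style and function name are illustrative assumptions, not a standard):

```python
def anchored_prompt(context: str, instruction: str) -> str:
    """Prepend grounding context, fenced with delimiters, so the model
    reasons over the data we supply rather than its general knowledge."""
    return (
        "Use only the information below to answer.\n\n"
        f"--- CONTEXT ---\n{context}\n--- END CONTEXT ---\n\n"
        f"{instruction}"
    )

prompt = anchored_prompt(
    context="Region A churn: 4.2% (Q1) -> 5.8% (Q2). Region B churn: flat at 3.1%.",
    instruction="Generate insights on customer churn trends by region.",
)
```

The explicit delimiters help the model separate data from instructions, which matters even more when the context is fetched automatically, as in the RAG systems discussed below.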
Going beyond basics: Modular, RAG, and iterative design
Once you’ve mastered structure, role, and context, the next step is scalability. How do you make prompts that are consistent, maintainable, and efficient across teams and workflows? That’s where modular prompting, RAG, and the experimentation loop come in.
1. Modular prompting: Reusable building blocks
Modular design means breaking prompts into reusable components like functions in code. You define building blocks for roles, tone, format, and structure that can be recombined as needed.
For example, your prompt library might contain modules like [ROLE], [TASK], [FORMAT], and [TONE], each with a set of predefined variants.
Then, you can mix and match: [ROLE] Data Scientist + [TASK] Analyze customer churn + [FORMAT] Markdown table + [TONE] Clear.
This modularity boosts productivity and keeps outputs consistent, especially in collaborative teams or automated workflows.
Think of it as DRY for prompting: Don’t Repeat Yourself.
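A prompt-module library can be as simple as a dictionary of named building blocks plus a compose function. This is a hypothetical sketch of the idea (the module names and contents are invented for illustration):

```python
# Hypothetical prompt-module library: each block is reusable, like a function.
MODULES = {
    "ROLE": {"data_scientist": "You are a data scientist."},
    "TASK": {"churn": "Analyze customer churn in the data provided."},
    "FORMAT": {"md_table": "Present results as a Markdown table."},
    "TONE": {"clear": "Use clear, concise language."},
}

def compose(role: str, task: str, fmt: str, tone: str) -> str:
    """Mix and match modules into one prompt: DRY for prompting."""
    return " ".join([
        MODULES["ROLE"][role],
        MODULES["TASK"][task],
        MODULES["FORMAT"][fmt],
        MODULES["TONE"][tone],
    ])

prompt = compose("data_scientist", "churn", "md_table", "clear")
```

Updating a module in one place updates every prompt built from it, which is what keeps outputs consistent across a team.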
2. Retrieval-Augmented Generation (RAG): Grounding AI with real data
Even the best prompts can’t create accurate answers if the model lacks the right data. That’s where RAG comes in: a method that combines retrieval (fetching relevant context from a database or knowledge base) with generation (the model’s text output).
For example: “A support chatbot retrieves internal documentation about an API before answering a question, ensuring its response is factually correct and up to date.”
RAG transforms prompting from pure text instruction into a data-driven reasoning process. It’s especially valuable in industries like finance, healthcare, and education, where accuracy and context are non-negotiable.
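The retrieve-then-generate pattern can be sketched in a few lines. The toy retriever below ranks documents by word overlap purely for illustration; real RAG systems use embeddings and a vector store, and the final prompt would be sent to an LLM:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank docs by word overlap with the query.
    Production systems use embeddings and a vector database instead."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def rag_prompt(query: str, docs: list[str]) -> str:
    """Fetch relevant context, then build a prompt grounded in it."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {query}"

docs = [
    "The payments API requires an idempotency key header.",
    "Our office dress code is casual.",
    "Rate limits: 100 requests per minute per API token.",
]
prompt = rag_prompt("What headers does the payments API require?", docs)
```

Only the relevant documentation ends up in the prompt, which is how the support-chatbot example above stays factually grounded and up to date.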
3. The experimentation loop: Test, iterate, refine
Prompt engineering is inherently experimental. No matter how elegant your first version is, you’ll refine it — just like debugging or optimizing code.
A simple loop for improvement looks like this:
1️⃣ Write a prompt → 2️⃣ Test with examples → 3️⃣ Analyze output → 4️⃣ Tweak and retest
You can log performance metrics, compare versions (A/B testing), and use evaluation methods like BLEU, ROUGE, or LLM-based scoring.
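A minimal version of that loop can be automated. The sketch below scores outputs by expected-keyword coverage, a deliberately crude stand-in for metrics like ROUGE or LLM-based scoring, and compares two prompt versions over the same test cases (all names and data here are invented for illustration):

```python
def score(output: str, expected_keywords: list[str]) -> float:
    """Fraction of expected keywords present in the output: a crude
    stand-in for metrics like BLEU, ROUGE, or LLM-based scoring."""
    hits = sum(1 for kw in expected_keywords if kw.lower() in output.lower())
    return hits / len(expected_keywords)

def ab_test(outputs_a: list[str], outputs_b: list[str],
            expectations: list[list[str]]) -> str:
    """Compare two prompt versions' outputs over the same test cases."""
    avg_a = sum(score(o, e) for o, e in zip(outputs_a, expectations)) / len(expectations)
    avg_b = sum(score(o, e) for o, e in zip(outputs_b, expectations)) / len(expectations)
    return "A" if avg_a >= avg_b else "B"

# Imagined model outputs for two prompt versions on one test case:
winner = ab_test(
    outputs_a=["Summary: revenue grew; main risk is churn in Region A."],
    outputs_b=["Things happened this quarter."],
    expectations=[["revenue", "churn"]],
)
```

Logging these scores per prompt version turns refinement from guesswork into a measurable, repeatable process.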
Iterative prompting is how organizations build reliable AI systems: not through magic, but through method.
Why Prompt Engineering is here to stay
Some say prompt engineering will fade as models get better. In reality, the opposite is happening.
As models grow more capable, the complexity of what we ask from them also grows. We now want AI to reason, summarize, code, translate tone, detect bias, and integrate with structured data. And that requires sophisticated human design behind the scenes.
Prompt engineering is not a fad. It’s the interface layer between natural language and computation. It shapes not only accuracy but also ethics, usability, and creativity.
In practice, prompting is becoming a core skill across disciplines, like knowing how to Google effectively, but on a far more powerful scale.
At Nubank, we use prompt engineering and continuous fine-tuning to improve our AI agents. In risk management, we iteratively refine prompts that guide LLMs to support the New Product and Features (NP&F) risk assessment process, evaluate the data quality of issues (i.e., risks identified by the risk management team in relation to product launches), and automate routine tasks. With human-in-the-loop review and testing, these prompts increase the risk management team’s efficiency, help scale up processes, and accelerate new product and feature launches.
Beyond the commands: Prompting as communication
At its core, prompt engineering is about communication between humans and machines, but also between teams, disciplines, and ideas.
When we learn to write better prompts, we’re not just training AI. We’re refining how we express thought, structure reasoning, and share context clearly.
That mirrors how we work at Nubank: clarity first, collaboration always. Good prompts aren’t just technically correct — they’re humanly clear.
Conclusion: The future speaks in prompts
Prompt engineering is not the end goal; it’s a step in the evolution of how we think and build with technology.
At Nubank, we see every new tool as an opportunity to simplify complexity. Generative AI is no different: it’s another way to empower people through clarity, creativity, and better communication.
And as this new language keeps evolving, one thing remains constant: the clearer we are with our intent, the more powerful our technology becomes.