AI risks: security and exposure of sensitive data
Sending code to a cloud LLM potentially means sending your secrets, business logic and customer data to a third party. This risk is real and underestimated.
Read article →

When a team delegates its technical thinking to AI, it risks losing something irreplaceable: a deep understanding of its own system.
Read article →

Claude Code changes how we prototype software ideas. With a few well-crafted prompts, you can generate a functional base to iterate on, without spending time on tedious initialization.
Read article →

AI-generated code is plausible before it is correct. This subtle difference creates a new type of risk: bugs that pass code review because the code looks reasonable.
Read article →

2024 brought a deluge of AI tools for developers. In 2025, the question is no longer whether to use them, but how to use them effectively without losing fundamental skills.
Read article →

When an LLM generates code, who owns it? Can it reproduce code under a protected license? These questions are not theoretical: they expose your teams to legal liability.
Read article →

AI does not replace developers who keep learning; it replaces those who have stopped. Building a continuous learning routine is more urgent than ever.
Read article →

Classical computing rests on determinism: same input, same output, every time. Search engines began blurring this principle. LLMs have abandoned it entirely. What are the implications for developers?
Read article →