Opinion / May 1, 2026
The CTF scene is dead.
Frontier AI has broken the open CTF format. The scoreboard does not measure human skill cleanly anymore, and the old game is not coming back.
What makes me qualified to say this?
I started playing CTFs in 2021, the same year I started university. My first CTF was HCKSYD, a 48-hour solo CTF. I full solved it and won in 2 hours. I was completely hooked. That led me to win DownUnderCTF, Australia's largest CTF, with Blitzkrieg multiple times. Blitzkrieg was one of Australia's strongest teams at the time. I later joined TheHackersCrew, an international top-tier team that was consistently ranked highly on CTFTime, the main global ranking and event calendar the scene uses as its scoreboard. With them, I competed in some of the most prestigious CTFs in the world, consistently placing well within the top 10 until the end of 2025.
I am not saying this because I dislike CTFs. I am saying it because CTFs were the thing that made me fall in love with security. They taught me how to learn, gave me a way to measure myself, and introduced me to many of the people I respect most in the field. Watching people pretend the format is still fine is frustrating because the old game is not there anymore.
What changed?
As AI tools ramped up in capability, especially when GPT-4 first came out, a significant percentage of medium difficulty CTF challenges started becoming one-shottable, meaning a single prompt from a user could produce the solve and flag. You could paste a cryptography challenge into ChatGPT, come back in 10 minutes, and have the solution. At the time, we did not think too much of it. Hard challenges went mostly untouched, and the time save was not large enough to ruin the competition.
The issue was never that AI could help. CTF players have always used tools. The issue is when the model does the reasoning, writes the solve, and leaves the human with nothing meaningful to do besides copy the flag.
Enter Claude Opus 4.5
When Opus 4.5 dropped, the tone changed. Almost every medium difficulty challenge, and some hard challenges, became agent-solvable. Claude Code packaged everything into a CLI and made it easy to connect other CLI and MCP tools. It became trivial to build an orchestrator that used the CTFd API to spin up a Claude instance for every challenge. You could let the system run for the first hour, then only start working on whatever was left.
That changed the game. Teams that refused to use AI were not just missing a convenience; they were playing a slower version of the competition. Open online CTFs started becoming a question of how quickly you could automate the easy and medium work, then how much human attention you had left for the hardest challenges. The scoreboard started measuring orchestration and willingness to use frontier models alongside, and sometimes above, security skill.
The effects were obvious. The CTFTime leaderboard started feeling wrong. Some legendary teams that were consistently near the top appeared less often. Player activity felt lower. Challenge developers who treated CTFs as an artform had less reason to spend weeks building something beautiful if it was going to be eaten by an agent in minutes.
GPT-5.5 seals the deal
I have been working heavily with GPT-5.5 and GPT-5.5 Pro after launch. By benchmark metrics, 5.5 is close to Claude Mythos' capability, and Pro likely surpasses it. These models can one-shot Insane difficulty active leakless heap pwn challenges on HackTheBox. They can solve a large portion of what a smaller CTF organiser can realistically produce. If you orchestrate Pro against Insane challenges in a 48-hour CTF, there is a good chance you get the flag before the event ends.
That makes open CTFs pay-to-win. The more tokens you can throw at a competition, the faster you can burn down the board. Specialised cybersecurity models like alias1 by Alias Robotics are becoming less relevant compared to general frontier LLMs. The competition is turning into "who can afford to run enough agents, with enough context, for long enough."
CTFs feel much more like a cheesable mess than a competition. Your performance in a CTF no longer defines your skill the way it used to. Recruiting security practitioners by CTF performance is becoming less meaningful. It is not even a particularly good measure of AI skill, because most of the orchestration needed for CTFs is already open source or vibe codeable.
The "beginners are fine" take
I have seen various takes that beginners can still learn from CTFs as they always have. These takes miss the scoreboard. CTFs were not just a set of puzzles. They were a ladder. Even as a beginner, you had something to climb. You could see yourself improve, solve more challenges, place higher, join better teams, and become more competitive over time.
That feedback loop is breaking. If the visible scoreboard is dominated by teams using AI, a beginner is pushed toward using AI before they have built the instincts the AI is replacing. That is an anti-pattern. It prevents active learning, and active struggle is the bit that actually teaches you. It is also completely demotivating to put in real effort and see no visible progress because the ladder above you has been automated.
It also changes what challenge authors want to build. If beginner CTFs become another place where people quietly paste prompts and climb a scoreboard, authors have more reason to put their effort into learning platforms instead. At least on platforms like picoGym and HackTheBox, the expectation is education, and beginners are less incentivised to cheat themselves out of learning.
Beginners are better off using picoGym, HackTheBox, and other lab environments where the point is actually learning instead of pretending the public scoreboard still reflects human growth.
"CTF isn't dead"
I have seen some hopium posts about how CTF is not dead, it is just augmented by AI. They often point at CTFs like DEF CON to argue that AI still cannot solve everything. That is true, but it is the wrong defence.
The hardest top-tier finals have very few participants, and they are usually gated behind qualifiers that are easier than the finals themselves. If those qualifiers fall to agents, fewer genuinely qualified people reach the challenges that still resist AI. A tiny number of elite finals does not save the open online format that most people actually play.
The claim is not that every challenge is solved. The claim is that enough of the scoreboard has been automated that the scoreboard no longer means what it used to mean.
The "AI is useful for security research" take
CTFs were never meant to be security research. They can showcase new and interesting techniques, but the CTF itself is not the point of discovery. Just because AI is useful within a field does not mean it belongs in the competitive landscape of that field.
In CTFs, unrestricted AI removes the human from the puzzle almost entirely and reduces the art of security to a prompt. Sure, LLMs will keep getting better at security as long as CTFs are around, but that does not mean the competitive format is healthy. CTFs were an artform, a way to share techniques with nerds, and a way to push the human bounds of security skill. That purpose is being stripped away.
The "LLMs are chess engines for cyber" take
Chess has been dominated by computers for well over a decade. People use chess engines as an analogy for LLMs in CTFs, but they miss the point: chess engines are not allowed during competitive play. They are used for analysis, training, commentary, and practice. They enrich the game around the competition without replacing the person competing.
Imagine giving every competitive chess player the best chess engine and letting them use it freely during matches. Would that be considered fair? Would it be fun to watch? Would it justify prize pools? Would it push the human limits of what could be achieved in chess? The same questions apply to CTFs.
Organisers can't fight back
CTF organisers have tried techniques to break or deter LLM solutions, but they are temporary friction at best. Claude Code does not meaningfully care about old refusal-string tricks anymore. Frontier models are getting better at noticing prompt injections. Web search capabilities weaken challenges based on technologies released after the training cutoff. Rules that ask people not to use LLMs are ignored and almost impossible to enforce in open online events.
That leaves organisers in a bad position. If they make normal challenges, agents solve too much. If they make challenges deliberately hostile to agents, the challenges often become guessy, overengineered, or unpleasant for humans too. That is not a real fix. It just makes CTFs worse for everyone.
"just adapt bro"
This take is infuriating. People I have always looked up to in the community have said it. To me, it is completely nonsensical unless you explain what we are adapting into.
If adaptation means building better tooling, CTF players already did that. If adaptation means writing harder challenges, organisers already tried. If adaptation means accepting that the scoreboard is now an AI orchestration benchmark, then we should say that honestly instead of pretending the old competition still exists.
Even if organisers create guessier or more overengineered challenges that current LLMs cannot solve, there are no good paths for players to learn the required skills while staying competitive. A few models from now, that point may be irrelevant anyway. The trajectory of LLM security capability is moving too quickly for challenge design to stay ahead for long.
The aftermath
The scene that grew my love for CTFs is emptying out. The CTFTime leaderboard has almost no semblance of history or human skill anymore. The 2026 scoreboard is unrecognisable compared to every year before it. TheHackersCrew, alongside many other large and reputable teams, either do not play, play with far fewer people, or struggle to cut into the top 10. Unregulated cheating is through the roof. Some of the best CTFs, like Plaid CTF, are not running anymore.
These sentiments are not only mine. Many members of my local team, Emu Exploit, feel similarly. These are people who consistently attend the International Cybersecurity Championship, perform at the top level in bug bounty programmes, compete in Pwn2Own, and present at conferences including Black Hat. The people losing interest are not casual observers. They are exactly the kind of people the scene used to produce and retain.
The fun of CTFing is gone for many of the people who cared most. The loss is not just a scoreboard. It is the ladder from beginner curiosity to elite competition. It is the craft of challenge design. It is the feeling that a clever human solved something difficult because they understood it deeply.
That legacy is not being carried forward by open online CTFs in their current form. The format is dead. Something else may replace it, but pretending nothing fundamental has changed only makes the loss harder to talk about honestly. It also gives AI shills more room to capitalise on the decline by selling mediocre wrappers back to the community that made the training data valuable in the first place.
What now?
While a lot of what's happening in the CTF/AI space is super commercialised and out of our control, CTF has had a hugely positive impact on the industry. I have met so many kind, smart, and passionate people through CTFs. I have played some of the most beautifully crafted challenges and found some of the most intriguing unintended solutions.
The community around CTFing has been an amazing place to learn, grow, and connect. That's something we shouldn't lose, no matter where the competition goes. As a community, we should strive to stay together and build new avenues to stay passionate and keep learning. Security-adjacent social events like SecTalks, student conferences, and local meetups are great ways to stay connected and stay involved. Learning platforms and the communities they provide through platforms like Discord are also a valuable resource.
While it may be a struggle to find an alternative to what we had, the amazing community we have built around it is more important now more than ever as we find new ways to keep the competitive spirit alive.