Does this break tools that expand these URLs without visiting them?
If so, they’ve just made themselves incredibly attractive to malicious actors, since it will be harder for security controls to determine safely and quickly what is actually behind a URL.
AI is a magical black box that performs a bunch of actions to produce an output. We can’t trust a developer’s claims about what the black box does internally unless it is completely open source (including weights).
This is a concept for a system where the actions performed can be proven to those who don’t have visibility inside the box, so they can trust that the box is doing what it claims to be doing.
An AI enemy that can prove it isn’t cheating by providing proof of the actions it took. In theory.
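Roughly what I have in mind, as a toy sketch only: a plain hash commitment rather than an actual zero-knowledge proof, with made-up action names. The AI opponent publishes a commitment to its planned actions before the match and reveals them afterwards, so players can check nothing was swapped mid-game.

```python
# Toy commit-reveal sketch: the AI commits to its hidden plan up front,
# then reveals it later so players can verify it wasn't changed mid-game.
# This is a simple hash commitment, far weaker than a real ZK proof system.
import hashlib
import json
import secrets

def commit(actions: list[str]) -> tuple[str, str]:
    """Return (commitment, salt); the commitment is published before play."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + json.dumps(actions)).encode()).hexdigest()
    return digest, salt

def verify(commitment: str, salt: str, revealed: list[str]) -> bool:
    """After the game, anyone can check the revealed actions match the commitment."""
    digest = hashlib.sha256((salt + json.dumps(revealed)).encode()).hexdigest()
    return digest == commitment

planned = ["scout north", "build barracks", "attack at turn 12"]
commitment, salt = commit(planned)        # published before the match
print(verify(commitment, salt, planned))  # True: the revealed plan matches
```

Of course this only proves the plan didn’t change after the fact, not that the plan itself was generated fairly; that’s the part a real proof system would have to cover.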
Zero-knowledge proofs make a lot of sense in cryptography, but in a more abstract setting like this, they still rely on a lot of trust that the implementation actually generates proofs for all actions.
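For the purely cryptographic case the idea is concrete. A minimal sketch of a Schnorr-style zero-knowledge proof of knowledge in Python, with deliberately tiny, insecure toy parameters and not tied to any particular zkML system: the prover convinces the verifier it knows a secret exponent without revealing it.

```python
# Toy Schnorr-style zero-knowledge proof of knowledge of a discrete log.
# Parameters are tiny and insecure; purely illustrative.
import hashlib
import secrets

# Public toy parameters: p is prime, g generates a subgroup of prime order q.
p, q, g = 23, 11, 4

def hash_challenge(*values) -> int:
    """Fiat-Shamir challenge derived from the public transcript."""
    data = ",".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int):
    """Prover knows secret x and proves knowledge of x for y = g^x mod p."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)      # random nonce
    r = pow(g, k, p)              # commitment
    c = hash_challenge(g, y, r)   # challenge
    s = (k + c * x) % q           # response
    return y, (r, s)

def verify(y: int, proof) -> bool:
    """Verifier checks g^s == r * y^c without ever learning x."""
    r, s = proof
    c = hash_challenge(g, y, r)
    return pow(g, s, p) == (r * pow(y, c, p)) % p

y, proof = prove(7)
print(verify(y, proof))  # True: knowledge of the secret is proven, not revealed
```

The hard part isn’t this primitive, it’s trusting that every action the system takes is actually routed through a circuit that produces such a proof.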
Whenever I see Web3, I personally lose any faith in whatever is being presented or proposed. To me, blockchain is an impressive solution to no real problem (except perhaps border control / customs).