Bugs always present?

I find it fascinating that you say “bugs” will always be written into software, and that these can be exploited, often at great loss to the unsuspecting. Is this a systemic inevitability? Surely it can’t be, code doesn’t write itself, coders do! Are bugs coded in to test for systemic collapse? I am concerned: if I employ a pro to protect me and my assets, is he coding a hack?

There is a lecture that covers this called Trust & Backdoors.

One approach to avoiding bugs is to build only simple, non-complex systems. This isn’t feasible - in fact, systems are getting more complex, which is one of the reasons security is struggling to keep up. Complexity is the nemesis of security.

Another approach is using formal methods in software engineering. Software is fundamentally a mathematical system, so rather than just testing it on sample inputs, you can mathematically prove properties of the system. This provides complete evidence of correctness: no matter what inputs the system receives, it will always compute the right values.
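To make the contrast with testing concrete, here is a minimal sketch in Lean 4 (the function name and theorem name are my own, purely illustrative): instead of checking a function on a handful of inputs, we state and prove a property that holds for every possible input.

```lean
-- A trivially simple function: doubling a natural number.
def double (n : Nat) : Nat := n + n

-- A test only checks particular inputs, e.g. double 3 = 6.
-- A proof covers *all* inputs at once: for every Nat n,
-- double n really does equal 2 * n.
theorem double_correct (n : Nat) : double n = 2 * n := by
  unfold double   -- expose the definition: goal becomes n + n = 2 * n
  omega           -- linear arithmetic decides the rest
```

The theorem is checked by the compiler itself; if the proof is accepted, no input can ever violate the stated property. Real formal verification applies this same idea to far larger properties (memory safety, protocol correctness), which is where the time and cost quickly mount.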

Unfortunately, currently only the most critical software goes through formal methods, such as air transportation and process control systems. The formal process is still too time-consuming and cost-prohibitive for most systems.

So yes, all humans code errors!