OWASP 20th Anniversary Celebration

Posted on Sep 24, 2021

Below are notes from presentations at the OWASP 20th anniversary conference. I tried to capture the main points and takeaways, plus any interesting details. They are not particularly readable, though - they're more of a personal memo.

There were a few different tracks running simultaneously, so it was tough to choose what to watch, and I will definitely go back to check out some presentations I missed.

Keynote

Philippe de Ryck kicked things off with a keynote outlining his vision for a future where security responsibility is shifted away from devs and encapsulated in libraries.

JWT

He talked about how, when parsing a JWT, Apache Pulsar used the insecure parse() function instead of the signature-verifying parseClaimsJws(). It turns out that the former allows the "none" signing algorithm in the JWT, enabling malicious tampering.

How is a developer supposed to look at parse() and know that it is insecure? Do they have to do in-depth security research on every single method they call? OK, they could see from the docs that the none algorithm is permitted and choose a different library. But that library could allow a malformed algorithm such as NoNe, NONe, etc. If we encapsulate secure behaviour in the libraries themselves, we absolve the developer of the responsibility to independently apply complex secure coding guidelines.
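
The example in the talk was Java's jjwt API inside Apache Pulsar. As a rough TypeScript analogue - my own illustration using the jsonwebtoken npm package, not code from the talk - the same split exists between a decode that never checks the signature and a verify that does, with an explicit algorithm allow-list shutting out "none" in any spelling:

```typescript
import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET ?? "dev-only-secret"; // hypothetical key source

// Insecure: decode() only base64-decodes the token and never checks the signature,
// so a tampered or unsigned ("alg": "none") token is accepted as-is.
function insecureParse(token: string) {
  return jwt.decode(token);
}

// Safer: verify() checks the signature and, with an explicit algorithm allow-list,
// rejects "none"/"NoNe"/"NONE" and algorithm-confusion tricks outright.
function secureParse(token: string) {
  return jwt.verify(token, SECRET, { algorithms: ["HS256"] });
}
```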

React

Next, he walked us through some React issues. Here, security issues are a bit more obvious: the dangerouslySetInnerHTML prop is named to flag the fact that it is insecure. But this is not enough - it prevents accidental misuse, but doesn't actually educate about XSS, its risks or its mitigation.

Running semgrep on your codebase is fine, but it doesn't scale well because you end up with tonnes of findings flagged as dangerous even though they could be safe. Why not use a SafeHtml template that is safe by design and takes the heat off devs, who have to follow not only security guidelines but also performance, usability, design, etc.?
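
A sketch of what that "safe by design" wrapper could look like - my own illustration, using the DOMPurify library as an assumed sanitiser rather than anything shown in the talk. Developers render through the wrapper and never touch dangerouslySetInnerHTML directly:

```tsx
import * as React from "react";
import DOMPurify from "dompurify";

// By default React escapes interpolated values, so this is XSS-safe already.
const Comment = ({ text }: { text: string }) => <p>{text}</p>;

// When raw HTML is genuinely needed, confine dangerouslySetInnerHTML to one
// audited component that always sanitises its input first.
const SafeHtml = ({ html }: { html: string }) => (
  <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(html) }} />
);
```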

Implementation

Of course, this is easier said than done, but there are some examples of this happening already. A post on the Netflix Tech Blog goes into the philosophy of creating a ‘paved road’ of security - one single solution that covers all user requirements in a centralised and secure way. In this case, they replaced reams of complicated flow charts and security documentation that devs had to follow with a single firewall that took care of everything securely.

Personally, I like this vision, and one thing jumped out at me - avoiding repetition of work. I find it frustrating when I have to repeat the same process in multiple places, and the thought of many teams doing just that is dreadful to me. Want to refine how JWTs are parsed? OK, make changes in the 20 different places we parse JWTs! …no thanks. Much better to make a change to a YAML file and have it pushed to production from a single point, covering the whole application. Less room for human error, too.

How security, dev & testing can work together to stop the same recurring vulns appearing in the top 10

Stefania Chaplin gave a really informative talk about embedding security in every stage of DevOps. She also focused on changing culture - removing the culture of fear, which I totally agree with. If someone is afraid their mistake will cost them their job or reputation, it’s less likely to surface and be resolved quickly.

API Security Top 10

Isabelle Mauny's talk covered the OWASP API Security Top 10.

Cloud breaches are linked to misconfigured APIs in 60% of cases (Source).

Why?

  • human error
  • security is considered way too late in the API lifecycle
  • devs have great tools for productivity - automated builds etc. - but appsec is not as automated or efficient
  • app architecture has changed - a lot of controls are now client-side
  • security is no longer about securing the perimeter - it is now about protecting data

OWASP API Security Top 10

She covered some key differences between API1 and API5 on the list.

Number 1 on the Top 10 is Broken Object Level Authorization (BOLA):

  • the true fix is fine-grained authorisation to resources in every controller layer (see the sketch after this list)
  • OAuth scopes are not the solution here, as they limit access to an operation and not to a resource
  • avoid guessable IDs and avoid exposing internal IDs
  • mitigate scraping with rate limiting
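
A minimal sketch of that fine-grained controller-layer check, assuming an Express/TypeScript API with hypothetical names (my illustration, not from the talk): look the object up, then confirm the authenticated caller actually owns it before returning it, rather than trusting the ID in the URL.

```typescript
import express from "express";

const app = express();

// Hypothetical store; in reality this would be a database lookup.
const invoices = new Map([["inv_123", { id: "inv_123", ownerId: "user_42", total: 99 }]]);

app.get("/invoices/:id", (req, res) => {
  const invoice = invoices.get(req.params.id);
  if (!invoice) return res.sendStatus(404);

  // Object-level authorisation: the record must belong to the caller,
  // regardless of whether its ID was guessed, scraped or leaked.
  const callerId = (req as any).user?.id; // assumed to be set by earlier auth middleware
  if (invoice.ownerId !== callerId) return res.sendStatus(403);

  res.json(invoice);
});
```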

Number 5 is Broken Function Level Authorization. Prevention:

  • do not mix admin and non-admin ops in the same API (see the sketch after this list)
  • avoid naming endpoints so that they're easy to discover via dictionary attacks
  • restrict access to admin APIs, e.g. by mutual TLS or IP range
  • do not rely on the client apps to enforce this
  • OAuth scopes can help here
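
A companion sketch for function-level authorisation - again an assumed Express/TypeScript setup rather than anything shown in the talk: admin operations sit on their own router behind middleware that checks the caller's role or scope server-side, instead of trusting the client to hide the admin UI.

```typescript
import express from "express";

const app = express();

// Hypothetical middleware: reject anyone whose token lacks the admin scope.
function requireAdmin(req: express.Request, res: express.Response, next: express.NextFunction) {
  const scopes: string[] = (req as any).auth?.scopes ?? []; // assumed to be set during token validation
  if (!scopes.includes("admin")) return res.sendStatus(403);
  next();
}

// Admin operations on a separate router, not mixed in with user-facing endpoints.
const admin = express.Router();
admin.use(requireAdmin);
admin.delete("/users/:id", (req, res) => res.sendStatus(204));

app.use("/admin", admin); // could additionally be restricted by mTLS or IP range at the gateway
```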

Logging

Goals:

  • forensics and non-repudiation

Need event logs for anything unusual:

  • rejected requests
  • critical info needs to be logged at a level that is enabled in production, not only at debug

Need to record what, when, who called and where (API, machine name, pod name, etc.).

Recommendations:

  • log early - adding logs to an already written app is a nightmare
  • invest in a shared framework or custom library that everyone uses and which implements best practice - this will make logging easier and more coherent.

Do not log PII, tokens, API keys (hash these so you can still track them), encryption keys or any sensitive info.

Anything passed as a query parameter will appear in some logs, somewhere.
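
A sketch of what a small shared logging helper along these lines might look like - my own illustration, with assumed names and fields: every event records what/when/who/where, and credentials are fingerprinted before they ever reach the log.

```typescript
import { createHash } from "node:crypto";

// Hash secrets before logging so requests can still be correlated
// without the raw token or API key ever landing in log storage.
const fingerprint = (secret: string) =>
  createHash("sha256").update(secret).digest("hex").slice(0, 12);

interface AuditEvent {
  what: string;       // e.g. "request.rejected"
  when: string;       // ISO timestamp
  who: string;        // caller identity (never the raw credential)
  where: string;      // API, machine/pod name, etc.
  detail?: Record<string, unknown>;
}

function audit(event: AuditEvent) {
  // In a real shared library this would go to a structured log sink, not stdout.
  console.log(JSON.stringify(event));
}

audit({
  what: "request.rejected",
  when: new Date().toISOString(),
  who: `apikey:${fingerprint("sk_live_example")}`, // hypothetical key
  where: "orders-api/pod-7f9c",
  detail: { reason: "expired token" },
});
```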

Proactive Security (not reactive)

  • bad design decisions are hard to undo

Define what we are building -> evaluate the risk

  • who are the consumers?
  • which data do we expose?
  • do we need to sign or encrypt?

Interface design -> what are we exposing to whom?

  • who has access?
  • reduce resources etc.

Call to action - use the Top 10 as a framework for designing and testing. Shift left with security, hack yourself and automate security.

For every functional test, you should have 10 security tests that automatically send garbage, expired keys, and so on.
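
As a rough illustration of that ratio - my own sketch, assuming a Jest-style runner, Node 18+ global fetch and a hypothetical /orders endpoint: one happy-path test surrounded by automated negative tests.

```typescript
// Hypothetical endpoint and API key; assumes a Jest-style runner and Node 18+ global fetch.
const BASE = "https://api.example.test";
const VALID_KEY = process.env.TEST_API_KEY ?? "test-key";

const get = (path: string, key?: string) =>
  fetch(`${BASE}${path}`, { headers: key ? { Authorization: `Bearer ${key}` } : {} });

test("functional: returns the order for a valid caller", async () => {
  expect((await get("/orders/ord_1", VALID_KEY)).status).toBe(200);
});

// The accompanying security tests: same endpoint, hostile inputs.
test("rejects requests with no credentials", async () => {
  expect((await get("/orders/ord_1")).status).toBe(401);
});

test("rejects an expired or garbage token", async () => {
  expect((await get("/orders/ord_1", "expired-or-nonsense")).status).toBe(401);
});

test("rejects malformed IDs instead of erroring", async () => {
  expect((await get("/orders/%00' OR 1=1--", VALID_KEY)).status).toBe(400);
});
```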

Security in AI

Aaron Ansari spoke about security in AI.

Vulnerabilities

  • injection - prompt injection
  • broken auth - human validation
  • insecure design - security is generally not considered
  • lack of logging and monitoring - there are tonnes of things to be logged, especially audio etc., and it's costly due to cloud storage

Mitigation

  • log all calls etc., even if it is expensive
  • human in the loop - escalates appropriately for fine tuning (see the sketch after this list)
  • visibility - see what’s going on inside the engine and fine tune as needed
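
A minimal sketch of the "log everything, human in the loop" idea - entirely my own illustration with hypothetical names, not something shown in the talk: every model call is logged, and low-confidence outputs are queued for a human reviewer instead of being returned directly.

```typescript
interface ModelResult {
  output: string;
  confidence: number; // 0..1, assumed to be reported alongside the output
}

// Hypothetical model client, stubbed so the sketch is self-contained.
async function callModel(prompt: string): Promise<ModelResult> {
  return { output: `echo: ${prompt}`, confidence: 0.55 };
}

// Hypothetical review queue standing in for a real ticketing/labelling system.
const reviewQueue: Array<{ prompt: string; result: ModelResult }> = [];

async function answer(prompt: string): Promise<string | null> {
  const result = await callModel(prompt);

  // Log every call, even though the storage is costly, so there is an audit trail.
  console.log(JSON.stringify({ when: new Date().toISOString(), prompt, confidence: result.confidence }));

  // Human in the loop: anything the model is unsure about is escalated for review
  // (and later used for fine tuning) rather than returned directly.
  if (result.confidence < 0.8) {
    reviewQueue.push({ prompt, result });
    return null;
  }
  return result.output;
}

answer("example prompt").then(console.log);
```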

Link to a model AI governance framework.

Future of AI

AI must be context aware - physical knowledge of how objects behave, social knowledge of how people interact etc.

Mantium offers a human in the loop solution.

Key takeaways

  • human in the loop
  • algorithmic protection
  • AI governance frameworks
  • code scanning
  • context is key

20 Years of SQL Injections in the Wild

Informative and entertaining presentation by Or Katz - a man who the universe decided was destined for SQL injection, just by entering his name into a form. Nominative determinism at its finest.

The talk is covered in his blog post, which describes how Akamai optimised their web attack detection rules.

Your code may be secure, but what about your pipeline?

Marcin Szydlowski gave a talk on securing pipelines.

Traditional change management process - change requested, developed, tested and deployed. Now with Agile, a lot of stuff is automated. We can’t rely on manual testing etc anymore, so we rely on a CI/CD pipeline.

How do we ensure security?

  • traditionally, we relied on manual review, sign-off, changes being tested, etc.
  • agile/devops - code is reviewed to avoid bugs, tools detect security issues, the process is fully repeatable and automated, and humans don’t have direct or indirect uncontrolled access to sensitive environments. The ability to circumvent any of these steps may impact the system’s security posture.

Problem statement: we have become so focused on automated security testing that we forgot about the security of the pipeline itself.

Incorrectly scoped config reviews give us only a false sense of security - e.g. reviewing only the CI tool while ignoring artifact management, etc. People don’t really know how to secure pipelines and just say to ‘limit permissions, enable logging’ and so on.

The SolarWinds attack made people realise that securing the pipeline is extremely important.

SLSA - Supply-chain Levels for Software Artifacts. A security framework from source to service.

What could go wrong?

Lack of basic security hygiene - unpatched systems, default config, poor password policy etc.

Jenkins - ‘anyone can do anything’ setting…

lack of branch protection mechanism

A mechanism that allows you to enforce certain rules in your Git repository, e.g. all code must be reviewed before it is merged. This ensures that no single dev can merge code into production.

E.g. the Allstar GitHub app.
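
Branch protection is usually switched on in the Git host's settings or via a policy app like Allstar, but as a sketch (mine, not the speaker's) it can also be managed in code through GitHub's REST API, which keeps the rule itself reviewable and repeatable:

```typescript
// Sketch: enabling branch protection on main via GitHub's REST API
// (PUT /repos/{owner}/{repo}/branches/{branch}/protection).
// The owner, repo and token values are placeholders.
async function protectMain(): Promise<void> {
  const res = await fetch(
    "https://api.github.com/repos/example-org/example-repo/branches/main/protection",
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
        Accept: "application/vnd.github+json",
      },
      body: JSON.stringify({
        required_pull_request_reviews: { required_approving_review_count: 1 }, // no unreviewed merges
        enforce_admins: true,         // admins cannot bypass the rules either
        required_status_checks: null, // out of scope for this sketch
        restrictions: null,
      }),
    },
  );
  if (!res.ok) throw new Error(`Failed to enable branch protection: ${res.status}`);
}

protectMain();
```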

principle of least privilege not followed

Overly broad permissions for regular devs allow them to bypass all the defined security controls. Devs like to be completely free so they can fix problems easily; however, this leads to insecure pipelines.

Apply basic security hygiene and the least privilege principle everywhere. Pipeline configs should be stored as code, version controlled and subject to review. Devs writing the code should not be able to modify the pipeline config in an uncontrolled manner. Manual uploads to the artifact management system or image registry should be strictly controlled. Access to prod envs should be strictly controlled and given only to those who need it.

improper segregation

Improper segregation of access, allowing people access… Always think about the consequences of executing untrusted code.

Segregate projects - one space/instance per project. Create environment (PRD/QA) segregation in the CI tool. Reassess execution of untrusted code on PE from a security perspective. Limit the parts of the code which can be modified by an anonymous user. Consider usage of shared libraries or templates. Rotate secrets.

lack of integrity checks

How do you ensure that what is running in PRD is what went through the pipeline? A lack of integrity checks may result in untrusted artifacts running in your environment - a dev could upload an artifact manually. Provenance is proof that an artifact has come from the right place and has been through the pipeline.
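
A minimal sketch of such an integrity check, assuming the pipeline records a SHA-256 digest for every artifact it produces (my illustration, not from the talk): recompute the digest at deploy time and refuse anything that doesn't match.

```typescript
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

// Hypothetical inputs: the artifact to deploy and the digest the pipeline recorded for it.
const ARTIFACT_PATH = "dist/app.tar.gz";
const EXPECTED_DIGEST = process.env.PIPELINE_RECORDED_SHA256 ?? "";

async function verifyArtifact(): Promise<void> {
  const digest = createHash("sha256").update(await readFile(ARTIFACT_PATH)).digest("hex");
  if (digest !== EXPECTED_DIGEST) {
    // The artifact didn't come through the pipeline (or was modified afterwards) - refuse to deploy it.
    throw new Error(`Artifact digest mismatch: got ${digest}, expected ${EXPECTED_DIGEST}`);
  }
  console.log("Artifact digest matches the pipeline record - OK to deploy.");
}

verifyArtifact();
```

Provenance in the SLSA sense goes further, with signed attestations about how and where the build ran, but a digest check is the minimum that stops a manually uploaded artifact slipping through.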