OWASP API Top 10 Most Critical Security Risks in 2019

A lot of us have been waiting for OWASP to publish a new Top 10 list. Now that APIs are key to every solution, OWASP has published the Top 10 Most Critical API Security Risks! It is worth the time to read it through. You don’t want to be in the news for the wrong reasons. #infosecurity #applicationsecurity

https://owasp.org/www-project-api-security/

Java – Spring Security Framework and Azure AD

Yesterday I was wondering if Microsoft supports middleware packages for Java that allow the typical resource provider actions on an access_token or id_token, similar to what the OWIN NuGet packages or the PassportJS libraries for NodeJS do. Those two libraries act as middleware intercepting HTTP requests. They allow you to programmatically parse the Authorization headers to extract Bearer tokens, validate the tokens, extract claims from the tokens, etc.

The libraries I had found so far, and that I was familiar with, were the MSAL and ADAL sets of libraries. These are client-side libraries, meant for applications or APIs acting as OAuth2 clients (not as resource providers).

Microsoft does not maintain a Java middleware library similar to OWIN or Passport. Since the development team is using Spring and will use Azure Active Directory as the Identity Provider, they could use the Spring Boot Starter for AAD: https://github.com/microsoft/azure-spring-boot/tree/master/azure-spring-boot-starters/azure-active-directory-spring-boot-starter

Spring Boot is a wrapper around the Spring Framework libraries, packaged with preconfigured components.

Spring Security can also work with AAD, but there aren’t many articles out there about how to use that framework when AAD is the IdP issuing JWT tokens, except for this one:

Article and related links: https://azure.microsoft.com/en-us/blog/spring-security-azure-ad/

There is also this older blog post with some sample code using Spring Framework Security. The example is for illustration purposes only, though, and uses an access_token issued for a SPA client application in order to request access to an API, which is not exactly the case of the application we’re trying to modernize (a multi-page JSP web application):

https://dev.to/azure/using-spring-security-with-azure-active-directory-mga
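
For orientation, here is a minimal sketch of a Spring Security resource-server configuration that validates AAD-issued JWTs. This is an assumption-laden illustration rather than the approach from the articles above: it presumes Spring Boot 2.x with the spring-boot-starter-oauth2-resource-server dependency and the AAD v2 issuer, and the tenant ID is a placeholder.

```java
// Sketch only. Assumes Spring Boot 2.x with spring-boot-starter-oauth2-resource-server
// and the AAD v2 issuer; <tenant-id> is a placeholder.
//
// application.properties:
//   spring.security.oauth2.resourceserver.jwt.issuer-uri=https://login.microsoftonline.com/<tenant-id>/v2.0

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class ResourceServerConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers("/api/**").authenticated() // API endpoints require a valid token
                .anyRequest().permitAll()
            .and()
            .oauth2ResourceServer()
                .jwt(); // validate incoming Bearer tokens (signature, issuer, expiry) as JWTs
    }
}
```

With the issuer-uri property set, Spring Security discovers the AAD signing keys from the issuer's metadata and performs the JWT validations on every request.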

OAuth2/OIDC Libraries for Java that work with Microsoft IDPs

Microsoft identity providers work with two types of libraries:

  • Client libraries: Native clients (iOS, Android), JavaScript SPA applications, and servers use client libraries to acquire access tokens for calling a resource such as Microsoft Graph or other APIs.
  • Server middleware libraries: Web apps use server middleware libraries for user sign-in. Web APIs use server middleware libraries to validate tokens that are sent by native clients or by other servers.

Microsoft produces and maintains two open-source client libraries for Java that work with Microsoft identity providers: ADAL and MSAL.

They support the industry standards OAuth 2.0 and OpenID Connect 1.0.

Client Libraries and Microsoft IDP versions

IDP (Identity Provider)    Client Library
AAD v1                     ADAL4J
ADFS OAuth/OIDC            ADAL4J
AAD v2                     MSAL Java (also known as MSAL4J)
AAD B2C                    MSAL Java (also known as MSAL4J)

MSAL Reference: https://javadoc.io/doc/com.microsoft.azure/msal4j/latest/index.html

MSAL Java Wiki https://github.com/AzureAD/microsoft-authentication-library-for-java/wiki

MSAL Java Project Entry point in GitHub https://github.com/AzureAD/microsoft-authentication-library-for-java

MSAL Java sample applications https://github.com/AzureAD/microsoft-authentication-library-for-java/tree/dev/src/samples
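
To give a flavor of the client-side API, here is a rough sketch of a confidential client acquiring an app-only token with MSAL4J. It assumes msal4j 1.x; the client ID, secret and tenant values are placeholders, not taken from any of the samples above.

```java
import java.util.Collections;
import com.microsoft.aad.msal4j.ClientCredentialFactory;
import com.microsoft.aad.msal4j.ClientCredentialParameters;
import com.microsoft.aad.msal4j.ConfidentialClientApplication;
import com.microsoft.aad.msal4j.IAuthenticationResult;

public class MsalClientCredentialsSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: register your own app in AAD to obtain real values.
        ConfidentialClientApplication app = ConfidentialClientApplication.builder(
                        "<client-id>",
                        ClientCredentialFactory.createFromSecret("<client-secret>"))
                .authority("https://login.microsoftonline.com/<tenant-id>/")
                .build();

        // Client Credentials flow: no user interaction, the app authenticates as itself.
        ClientCredentialParameters parameters = ClientCredentialParameters.builder(
                        Collections.singleton("https://graph.microsoft.com/.default"))
                .build();

        IAuthenticationResult result = app.acquireToken(parameters).join();
        System.out.println("Access token acquired, expires at: " + result.expiresOnDate());
    }
}
```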

ADAL is the older library, used to communicate with identity providers such as ADFS and the AAD v1 token and authorize endpoints.

ADAL Reference https://javadoc.io/doc/com.microsoft.azure/adal4j/latest/index.html

Project source code in GitHub https://github.com/AzureAD/azure-activedirectory-library-for-java

Sample Java application https://github.com/Azure-Samples/active-directory-java-webapp-openidconnect

There is also an oauth2-oidc-sdk for Java that contains the packages needed for token deserialization, token validation, and processing of claims, which is typically done server side, when the web app or API receives a bearer token in the HTTP(S) Authorization header.

To my knowledge, this SDK is not maintained by Microsoft; it is the Nimbus SDK published by Connect2id.

https://www.javadoc.io/doc/com.nimbusds/oauth2-oidc-sdk/latest/index.html

Note: Server-side validation of the token, specifically the verification of the token’s digital signature (decrypting the signed hash and comparing it against a freshly calculated hash), is critical to ensure the token claims weren’t tampered with in transit and that the IdP wasn’t spoofed. There are other token validations, but this one in particular guarantees the integrity of the information and the source of the token.
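
A rough sketch of that signature check using the Nimbus JOSE+JWT classes (which the oauth2-oidc-sdk builds on) could look like the following; it assumes you have already obtained the IdP’s RSA public key, for example from its JWKS endpoint.

```java
import java.security.interfaces.RSAPublicKey;
import java.util.Date;
import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

public class TokenCheckSketch {

    // Sketch only: 'publicKey' is assumed to have been fetched from the IdP's JWKS endpoint.
    static JWTClaimsSet validate(String bearerToken, RSAPublicKey publicKey) throws Exception {
        SignedJWT jwt = SignedJWT.parse(bearerToken);          // deserialize the compact JWT
        if (!jwt.verify(new RSASSAVerifier(publicKey))) {      // check the RSA signature
            throw new SecurityException("Token signature verification failed");
        }
        JWTClaimsSet claims = jwt.getJWTClaimsSet();           // extract the claims
        Date exp = claims.getExpirationTime();
        if (exp == null || exp.before(new Date())) {           // one of the other validations
            throw new SecurityException("Token is missing an expiry or has expired");
        }
        return claims;
    }
}
```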

This needs a good POC!

Managing external identities with AAD B2C tenants – public docs

Azure AD B2C is one of the fastest-growing identity providers in the world. When this type of tenant was created for social identities and digital citizens, the Microsoft Identity team didn’t anticipate its massive growth: over a billion users authenticate to their apps, and apps to APIs, using this type of Azure directory or tenant. When privacy norms such as GDPR in the European Union and CCPA in California, USA, came about, the flexibility provided by custom policies, the white-label customization of the HTML/CSS, and the Identity Experience Framework allowed solutions where end users have more control of their data, including the ability to remove the personal data that applications collect from them (profile information, credentials, user attributes and permissions). Microsoft Graph also provides one of the first RESTful APIs that allow application developers to programmatically perform CRUD operations on user accounts and principals associated to applications and services.
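
As an illustration of that kind of CRUD operation, a hedged sketch of deleting a user account through Microsoft Graph with the Java 11 HttpClient might look like this; the user object ID is a placeholder, and the access token is assumed to have been acquired separately with the appropriate permissions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphDeleteUserSketch {
    public static void main(String[] args) throws Exception {
        String accessToken = "<access-token>";    // placeholder: acquired via MSAL or similar
        String userObjectId = "<user-object-id>"; // placeholder: the B2C user's object id

        // DELETE /users/{id} removes the user account (requires User.ReadWrite.All or similar).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://graph.microsoft.com/v1.0/users/" + userObjectId))
                .header("Authorization", "Bearer " + accessToken)
                .DELETE()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Graph returned HTTP " + response.statusCode()); // 204 on success
    }
}
```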

This month the Microsoft Identity team has published a number of public articles to guide developers on the automation steps with Azure AD B2C.

Choosing an OAuth2.0 grant flow for your solution

A colleague of mine who also specializes in Identity Management (IdM) with Identity Providers (IdPs) such as AAD, Okta, ADFS, Ping Federate and IdentityServer published this summary of the six different flows encompassed by the OAuth2.0 RFCs.

If you work with the OAuth 2.0 Authorization Framework and the OIDC Authentication Framework, you’ll find the article below very interesting.

One thing to remember when you study the Authorization Grants or Flows in OAuth2.0 is that neither the Extension Grant flow nor the Client Credentials flow requires the end user to interact with a browser.

The Client Credentials flow is designed for headless daemons/services where there is no human interaction. The Extension Grant flow is sometimes referred to as OBO (On-Behalf-Of) and can be used to exchange SAML2.0 tokens for OAuth2.0 access tokens/bearer tokens. This grant doesn’t necessarily require user interaction during the token exchange at the IdP for an authorized application, as long as the token being exchanged is still valid. In this case the SAML2.0 bearer token, with its assertions, is exchanged for an access token/bearer token/JWT that represents the client application or service.
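
For instance, a bare-bones sketch of the Client Credentials token request against the AAD v2 token endpoint (the raw protocol request, as opposed to using a library like MSAL, and with no browser involved) could look like this with the Java 11 HttpClient; the tenant, client ID and secret are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClientCredentialsRequestSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your tenant, app registration and secret.
        String tokenEndpoint = "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token";
        String form = "grant_type=client_credentials"
                + "&client_id=<client-id>"
                + "&client_secret=<client-secret>"
                + "&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tokenEndpoint))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        // The JSON response contains the access_token (a bearer token) for the daemon/service.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```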

Logarithms

My niece is in middle school. Over FaceTime today she mentioned she was taking logarithm lessons in math. I was so happy she found them interesting!

She did ask, though: “I don’t think I will ever use them. Why do I need to know what a logarithm is to go buy groceries, for instance?”

I rolled my eyes and told her… well, if you go to the store and find a packaged meal that has a pH of 4 and the next similar meal has a pH of 3, you should know that’s a logarithmic scale with base 10 and the second meal is 10 times more acidic than the first one.

If you go to a rock concert and the sound engineer is asked to pump the volume up and increases it by 10 dB, that is equivalent to a 10-fold increase in sound intensity (which broadly corresponds to a doubling in perceived loudness). The decibel scale for sound intensity is also logarithmic. Your ears respond to sound in a logarithmic manner.

If the news reports an earthquake and gives the intensity on the Richter scale, you should know that is also a logarithmic scale. Each whole number on the Richter scale corresponds to an increase in released energy by a factor of the square root of 1000, which is about 31.6 times. So if they announce a magnitude 5 earthquake and then a magnitude 6 one, the second will release roughly 31.6 times more energy… and please, please, please, get to safety…
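
For the curious, the arithmetic behind that 31.6 factor, using the standard relation between Richter magnitude and radiated energy, is:

```latex
% Gutenberg-Richter energy relation: \log_{10} E \propto 1.5\,M,
% so a one-unit increase in magnitude means
\[
\frac{E_2}{E_1} = 10^{1.5\,(M_2 - M_1)} = 10^{1.5} = \sqrt{1000} \approx 31.6
\]
```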

Your aunt!

Digital signatures and certificate expiration dates

Why sign your code binaries or documents?

As a software publisher, there are two reasons to sign your code:

  1. To prove its Integrity
  2. To develop its Reputation

Best Practice: Time-stamping.

When signing code, as a publisher, you have the opportunity to timestamp your code. Time-stamping adds a cryptographically verifiable timestamp to your signature, proving when the code was signed. If you do not timestamp your code, the signature can be treated as invalid upon the expiration of your digital certificate. Since it would probably be cumbersome to re-sign every package you’ve shipped when your certificate expires, you should take advantage of time-stamping. A signed, time-stamped package remains valid indefinitely, so long as the timestamp marks the package as having been signed during the validity period of the certificate.

If this is the case, it should be safe to distribute the file (or execute the binaries) even after the original certificate used to generate the digital signature has expired. If the signature was not timestamped, there is a risk that the file/executable will not pass verification.

 

When a file is digitally signed and timestamped, the signature will not expire when the certificate expires. The public key accompanying the executable file can still be used to verify the signature after the signing certificate expires.

This is what a digitally signed and timestamped file looks like in a Windows client OS (properties of the file):

[Image: Example of the properties of a file that was digitally signed and timestamped.]

Signing is there to prove that the assembly or file was created by a known person or company, and has not been changed by anyone else.

When signing an assembly, a hash is created and signed with the private key of the certificate’s private/public key pair. That private key is not distributed with the assembly; it stays with the publisher, in this case Microsoft.

The public key of the certificate, which is what you showed us, is distributed with the executable or assembly. To verify that the assembly has not been tampered with, you can calculate the hash using the same hashing algorithm specified in the properties of the assembly, for instance SHA1, and compare it with the encrypted hash carried in the assembly’s signature; the latter hash is decrypted with the certificate’s public key that is distributed with the assembly. If the hashes match, the assembly is the original one published by the publisher.
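
A rough sketch of that verification step in Java, using the standard java.security APIs (assuming, for illustration, an RSA signature over a SHA-256 digest rather than the exact algorithm of any particular file):

```java
import java.security.PublicKey;
import java.security.Signature;

public class SignatureCheckSketch {

    // Sketch only: verifies that 'signatureBytes' is a valid signature over 'fileBytes'
    // using the publisher's public key (assumes an RSA key and SHA-256 for illustration).
    static boolean isAuthentic(byte[] fileBytes, byte[] signatureBytes, PublicKey publisherKey)
            throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publisherKey);       // use the public key distributed with the file
        verifier.update(fileBytes);              // hash the file contents
        return verifier.verify(signatureBytes);  // compare against the signed (encrypted) hash
    }
}
```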

If the assembly (executable) is changed, for instance by malware, it is easy to find out by checking the hash of the assembly and its signature using the public key (which is included in the executable or file).

This is a verification that antivirus programs and anti-malware programs do.

Digital signing and timestamping are a layer of protection; however, they don’t guarantee security 100%: a bad publisher can still do the same and distribute files that are digitally signed and timestamped. The use of Certificate Authorities provides an additional level of trust in the publishers.

If the private key of the certificate used to digitally sign files is compromised, other files and executables could potentially be signed on behalf of the publisher by a malicious attacker hoping to distribute malware. This, though, won’t affect files or executables that were signed and timestamped before the breach happened. In this scenario, Certificate Revocation Lists are also essential to report the blacklisted, now-invalid certificate to the world.

IntelliTrace and the ‘magic’ of Historical Debugging

I love my job: I get to spread the good news on how to work efficiently with source code and solve very real-life problems.
I recently visited a development team in Nevada that was eager to learn more about Visual Studio debugging tools and the C# compiler open-source project on GitHub, named ‘Roslyn’.
The highlight of the sessions was the ‘discovery’ of IntelliTrace and how they could use this feature to improve the communication between the development team in Nevada and the QA team at another location. A few hard-to-reproduce bugs had been filed recently, one of them intermittent, without a consistent set of steps to reproduce. The team was using process dumps and WinDbg to try to pinpoint the cause, but, even though process dumps have their place, the size of the dump files made the search for a root cause quite difficult.

That is, until they tried IntelliTrace in their QA environments.

IntelliTrace is similar to a flight recorder, which records every event the airplane goes through from takeoff to landing. IntelliTrace had its debut in Visual Studio 2010 Ultimate and is now here to stay.


PowerShell Profiling

As part of my job I help developers take a closer look at their source code and analyze it under the “microscope”. Part of this analysis is profiling the performance of the different components of a solution for CPU, network, I/O and memory usage, trying to pinpoint the areas of the code that consume those resources and see whether they can be optimized. This is what is known as profiling an application or a solution.

Visual Studio 2017 Community, Professional and Enterprise editions all offer profiling and performance analysis tools. They cover a variety of languages and types of targets to be profiled. The image below shows the different profiling targets that can be analyzed with the Performance Profiler.

[Image: Performance Profiler targets in VS 2017]

In the world of DevOps, part of the build automation is done using scripting languages, and one of them is PowerShell. After one of the training sessions on performance analysis and profiling with VS 2017, the question was posed:

How can we analyze the performance of PowerShell scripts to determine the areas of the code that consume the most CPU and take the most time to complete?

The main aid that the VS 2017 performance tools offer is the ability to show the source code that takes the most CPU (identifying these sections of the code as “hot paths”) and the areas of the code that place the most objects on the heap without being reclaimed by any of the three garbage collection generations (memory leaks). The VS 2017 profiling and diagnostic tools can also analyze multi-threaded applications or applications that use parallel tasks. But what about profiling PowerShell code? How can similar profiling be done on PowerShell source code to look at CPU and memory utilization?

Visual Studio 2017 does not offer a specific profiling wizard or GUI for PS to correlate the OS CPU performance counters and the Memory counters with the PowerShell script code.

That being said, you can still profile PowerShell code; it’s just not as easy.

Using PowerShell itself, you can still access the CPU and memory counters available in the operating system.

This can be done using the System.Diagnostics namespace, or, in PowerShell 3.0 to 6.0, you can use the cmdlets in the Microsoft.PowerShell.Diagnostics module:

https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.diagnostics/?view=powershell-6

You can also use the Windows Management Instrumentation (WMI) cmdlets, but the recommended way to profile a process on remote hosts is to use the newer WinRM and WSMan protocols and their associated cmdlets.

These are the only references I’ve seen on the web regarding CPU and memory analysis of OS processes using PowerShell:

https://stackoverflow.com/questions/25814233/powershell-memory-and-cpu-usage

https://stackoverflow.com/questions/24155726/how-to-get-cpu-usage-memory-consumed-by-particular-process-in-powershell-scrip?noredirect=1&lq=1

https://docs.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Management/Get-Process?view=powershell-6

https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.management/get-wmiobject?view=powershell-5.1&viewFallbackFrom=powershell-6

 

Now, to use the WMI protocol on a host, the WMI Windows service needs to be up and running and listening on TCP/IP port 135. WMI is an older protocol built on top of DCOM, and some hosts have this Windows service stopped as part of host hardening.

WinRM is a service based on SOAP messages; it’s a newer protocol for remote management, with default HTTP connections listening on TCP/IP port 5985. If the connection uses transport layer security with digital certificates, the default HTTPS port is 5986.

WMI, WinRM and WSMan only work on Windows Server and Windows client operating systems.

One needs to inject profiling-like instrumentation (cmdlets such as Measure-Command) directly into the PowerShell code to find the hot spots that cause high CPU utilization. This can work, but then one needs to remember to either comment out or delete the direct instrumentation before the PowerShell code is run in the production environment.

If you have profiled your PowerShell automation scripts some other way, we’d love to hear your experience.

 

Happy coding DevOps!