Windows Events, how to collect them in Sentinel and which way is preferred to detect Incidents.

I started blogging again, but this time the article is on the Microsoft Tech Community platform. I’ve been working with Microsoft Sentinel for three years now, from its start as Azure Sentinel. The contents of this blog are the result of my experience delivering migrations to Sentinel from other SIEMs and green-field adoption projects where SOC teams learn and adopt Microsoft Sentinel as their SIEM. On several of these projects, the three main ways of ingesting Windows Server OS logs for IaaS architectures are a recurring theme, even in multi-cloud environments. This article outlines the three main ways to ingest OS events, which ones are considered Security Events, and the gotchas when utilizing a Windows Event Collector.

https://techcommunity.microsoft.com/t5/fasttrack-for-azure/windows-events-how-to-collect-them-in-sentinel-and-which-way-is/ba-p/3997342

My Azure Security references

If you’re an Azure Security Ninja learning about Sentinel (the Azure cloud-native SIEM), here’s a free Azure Sentinel webinar.

https://onedrive.live.com/?authkey=%21AM3%5FMenNud9f%2DZc&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21314&parId=66C31D2DBF8E0F71%21257&o=OneUp

Here’s a video on Azure Sentinel from Azure Fridays: https://www.youtube.com/watch?v=oiWInLYvnUk

For those of you learning about the Kusto Query Language, here’s a good free online class: https://www.youtube.com/watch?v=EDCBLULjtCM

More on KQL (Kusto Query Language) and its use in Azure Sentinel:


If you want a full list of all the webinars and the assets/files shared in them, take a look here -> https://techcommunity.microsoft.com/t5/microsoft-security-and/security-community-webinars/ba-p/927888

If you wonder how to integrate Application Gateway WAF v1 and Azure Security Center (ASC):

https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-integration-security-center

If you want to learn more about Logic Apps to use them in one of our Security Services, start here ->

https://docs.microsoft.com/en-us/azure/logic-apps/quickstart-create-first-logic-app-workflow

Azure Fridays Azure Security Center video:

Use of Logic Apps:

Enjoy!

OWASP API Top 10 Most Critical Security Risks in 2019

A lot of us have been waiting for OWASP to publish a new Top 10. Now that APIs are key in every solution, OWASP has published the Top 10 Most Critical API Security Risks! It is worth the time to read it through. You don’t want to be in the news for the wrong reasons. #infosecurity #applicationsecurity

https://owasp.org/www-project-api-security/

Digital signatures and certificate expiration dates

Why sign your code binaries or documents?

As a software publisher, there are two reasons to sign your code:

  1. To prove its Integrity
  2. To develop its Reputation

Best Practice: Time-stamping.

When signing code, as a publisher, you have the opportunity to timestamp your code. Time-stamping adds a cryptographically verifiable timestamp to your signature, proving when the code was signed. If you do not timestamp your code, the signature can be treated as invalid upon the expiration of your digital certificate. Since it would be cumbersome to re-sign every package you’ve shipped when your certificate expires, you should take advantage of time-stamping. A signed, time-stamped package remains valid indefinitely, so long as the timestamp marks the package as having been signed during the validity period of the certificate.

If this is the case, it should be safe to distribute the file (or execute the binaries) even after the original certificate used to generate the digital signature has expired. If the signature was not timestamped, there is a risk that the file or executable will not pass verification.

 

When a file is digitally signed and timestamped, the signature will not expire when the certificate expires. The public key accompanying the executable file will still be valid after the signing certificate expires.

This is how a file that is digitally signed and timestamped looks in a Windows client OS (properties of the file):

Example of the properties of a file that was digitally signed and timestamped.

Signing is there to prove that the assembly or file is created by a known person or company, and not changed by anyone else.

When signing an assembly, a hash is created and signed with the private key of the certificate’s private/public key pair. That private key is not distributed with the assembly; it stays with the publisher, in this case Microsoft.

The public key of the certificate, on the other hand, is distributed with the executable or assembly. To verify that the assembly has not been tampered with, you can calculate the hash using the same hashing algorithm specified in the properties of the assembly, for instance SHA-1, and compare it with the encrypted hash carried in the assembly’s signature; the latter is decrypted with the certificate’s public key that is distributed with the assembly. If the hashes match, the assembly is the original one published by the publisher.

If the assembly (executable) is changed, for instance by malware, this is easy to detect by checking the hash of the assembly against its signature using the public key (which is included in the executable or file).

This is a verification that antivirus programs and anti-malware programs do.
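As a rough sketch of that kind of check, assuming modern .NET on Windows (the file path is supplied by the caller and all names here are illustrative), you can read the signer certificate embedded in a signed file and hash its contents. Full Authenticode verification is performed by the operating system (the WinVerifyTrust API), which is what signing tools and antivirus engines ultimately rely on.

using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class SignatureInspector
{
    static void Main(string[] args)
    {
        string path = args[0]; // path to a signed .exe or .dll

        // Read the signer certificate that is distributed with (embedded in) the file.
        var signerCert = new X509Certificate2(X509Certificate.CreateFromSignedFile(path));
        Console.WriteLine($"Signed by : {signerCert.Subject}");
        Console.WriteLine($"Valid from: {signerCert.NotBefore} to {signerCert.NotAfter}");

        // Build the certificate chain to check that the signer chains to a trusted CA.
        var chain = new X509Chain();
        Console.WriteLine($"Chains to a trusted root: {chain.Build(signerCert)}");

        // Hash the file contents. Note: Authenticode actually hashes only the parts of
        // the PE image covered by the signature, so this plain file hash is just an
        // illustration of the hash-and-compare idea described above.
        using (var sha256 = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            Console.WriteLine($"SHA-256: {BitConverter.ToString(sha256.ComputeHash(stream))}");
        }
    }
}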

Digital signing and timestamping are a layer of protection; however, they don’t guarantee security 100%. A bad publisher can still sign and timestamp the files it distributes. The use of Certificate Authorities provides an additional level of trust in publishers.

If the private key of the certificate used to digitally sign files is compromised, other files and executables could potentially be signed on behalf of the publisher by a malicious attacker hoping to distribute malware. This, though, won’t affect files or executables signed and timestamped before the breach happened. In this scenario, Certificate Revocation Lists are also essential to report the compromised certificate as no longer valid.

HTTP Secure, Part II. Is Diffie-Hellman always used in the HTTPS key exchange?

The initial post for Part I – HTTP Secure is here.

I got a question right after I had spent a week in training classes for the CompTIA Security+ exam: to describe how HTTP Secure (HTTPS) modifies the HTTP traffic between a client browser and the server.

At the end of my explanation, this person also asked me what the role of the Diffie-Hellman algorithm was in the whole process.
I was almost sure that Diffie-Hellman isn’t used all the time in HTTPS, but I didn’t know how to explain, on the spot, when Diffie-Hellman is actually used. The person asking me the question was part of a review board and was adamant that Diffie-Hellman was always used.
I’ll abbreviate Diffie-Hellman as D-H for the remainder of this post.

How is the HTTP traffic between a server and a client modified when the channel is encrypted? What is SSL?

What is the role of the D-H key exchange algorithm?

Is D-H always used in HTTPS?

Those are the questions I’ll try to answer in this post.

How is the HTTP traffic between a server and a client modified when the channel is encrypted?

Similar to an HTTP request, the client begins the process by requesting an HTTPS session, either by entering an HTTPS address in the URL or by clicking on an HTTPS hyperlink. When the link must be secured, the server responds by sending the server’s certificate. The certificate includes the server’s public key; the matching private key is on the server and only accessible by the server.

The client then creates a symmetric key and encrypts it with the server’s public key. The creation of this symmetric session key is what differs among the versions of SSL/TLS, and in some cases D-H is used so that the client and the server each generate the exact same key, meaning the key is never actually exchanged. This symmetric key (also called the ephemeral key or session key) will be used to encrypt data in the HTTPS session.

When D-H is not used, the client sends the session key it generated to the web server, encrypted with the server’s public key. Only the server’s private key can decrypt the ciphertext and obtain the client’s session key; if attackers intercept the encrypted key, they won’t be able to decrypt it, since they don’t have access to the server’s private key. The server receives the encrypted session key and decrypts it with its private key. (This step does not happen when D-H is used, as the server generates an identical session key to the one generated by the client.) At this point, both the client and the server know the symmetric/session key, and all of the session data exchanged over the link is encrypted with it using symmetric encryption.
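To make the hybrid idea concrete, here is a minimal, illustrative sketch (assuming modern .NET; a real TLS handshake involves far more than this): an RSA key pair stands in for the key behind the server’s certificate, and a fresh AES key stands in for the session key the client generates.

using System;
using System.Security.Cryptography;

class HybridEncryptionSketch
{
    static void Main()
    {
        // Server side: an RSA key pair; the public half would live in the certificate.
        using (var serverRsa = RSA.Create(2048))
        // Client side: a fresh symmetric session key.
        using (var aes = Aes.Create())
        {
            // The client encrypts its session key with the server's public key.
            byte[] wrappedKey = serverRsa.Encrypt(aes.Key, RSAEncryptionPadding.OaepSHA256);

            // Only the server's private key can unwrap it.
            byte[] sessionKey = serverRsa.Decrypt(wrappedKey, RSAEncryptionPadding.OaepSHA256);

            Console.WriteLine(Convert.ToBase64String(sessionKey) == Convert.ToBase64String(aes.Key)
                ? "Both sides now hold the same session key."
                : "Key mismatch");

            // From here on, the bulk of the session data would be encrypted with the
            // shared symmetric key, which is much faster than asymmetric encryption.
        }
    }
}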

In case the server is configured to accept client certificates to authenticate the client, the exchange differs a little from the one described above. Digital certificates give clients a way to trust the server through validations of the certificate the server presents, but this subject (client certificates in the HTTPS exchange and server authentication using digital certificates) is beyond the scope of this post.

What is SSL?

Secure Sockets Layer (SSL) is an encryption protocol used to encrypt Internet traffic. For example, HTTPS uses SSL in secure web browser sessions. It can also encrypt other transmissions; for example, File Transfer Protocol Secure (FTPS) uses SSL to encrypt transmissions. SSL provides certificate-based authentication and encrypts data with a combination of both symmetric and asymmetric encryption during a session. It uses asymmetric encryption to privately share a session key, and symmetric encryption to encrypt the data displayed on the web page and transmitted during the session.

Netscape created SSL for its web browser and updated it through version SSL 3.0. The IETF created TLS to standardize improvements to SSL. Transport Layer Security (TLS) is the replacement for SSL and is widely used in many different applications. The IETF has updated and published several TLS documents specifying the standard. TLS 1.0 was based on SSL 3.0 and is referred to as SSL 3.1. Similarly, each update to TLS indicated it was an update to SSL; for example, TLS 1.1 is called SSL 3.2 and TLS 1.2 is called SSL 3.3.

SSL keeps the communication path open until one of the parties requests to end the session. The session is usually ended when the client sends the server a FIN packet, which is an indication to close out the channel. SSL requires an SSL-enabled server and browser. SSL provides security for the connection but does not offer security for the data once received.

In the protocol stack, SSL lies beneath the application layer and above the network layer. This ensures SSL is not limited to specific application protocols and can still use the communication transport standards of the Internet. Different books and technical resources place SSL at different layers of the OSI model, which may seem confusing at first, but the OSI model is a conceptual construct that attempts to describe the reality of networking; the SSL protocol works at the transport layer. Although SSL is almost always used with HTTP, it can also be used with other types of protocols, so if you see a common protocol followed by an “s”, that protocol is using SSL to encrypt its data.

SSL reached version 3.0 under Netscape. Since SSL was developed by Netscape, it is not an open-community protocol, which means the technology community cannot easily extend SSL to interoperate and expand its functionality. If a protocol is proprietary in nature, as SSL is, the technology community cannot directly change its specifications and functionality; if the protocol is an open-community protocol, its specifications can be modified by individuals within the community to expand what it can do and what technologies it can work with. The open-community version of SSL is Transport Layer Security (TLS). The differences between SSL 3.0 and TLS are slight, but TLS is more extensible and is backward compatible with SSL.

The SSL/TLS Handshake Protocol
The most complex part of SSL is the Handshake Protocol. This protocol allows the server and client to authenticate each other and to negotiate an encryption and Message Authentication Code (MAC) algorithm and cryptographic keys to be used to protect data sent in an SSL record. The Handshake Protocol is used before any application data is transmitted. The Handshake Protocol consists of a series of messages exchanged by the client and the server.

The Figure below shows the initial exchange needed to establish a logical connection between the client and the server. The exchange can be viewed as having four phases.

Handshake Protocol in Action 

After sending the client_hello message, the client waits for the server_hello message, which contains the same parameters as the client_hello message. For the server_hello message, the following conventions apply. The Version field contains the lower of the version suggested by the client and the highest version supported by the server. The Random field is generated by the server and is independent of the client’s Random field. If the SessionID field of the client was nonzero, the same value is used by the server; otherwise the server’s SessionID field contains the value for a new session. The CipherSuite field contains the single CipherSuite selected by the server from those proposed by the client. The Compression field contains the compression method selected by the server from those proposed by the client.

What is the role of the D-H key exchange algorithm?

D-H is a key exchange algorithm used to privately share a symmetric key between two parties; it wasn’t devised in the context of digital certificates and pre-dates them. The Diffie-Hellman scheme was first published in 1976 by Whitfield Diffie and Martin Hellman. The idea behind D-H is that it’s easy to compute powers modulo a prime but hard to reverse the process: if someone asks which power of 2 modulo 11 is 7, you’d have to experiment a bit to answer, even though 11 is a small prime. If you use a huge prime instead, this becomes a very difficult problem.

The D-H algorithm enables two systems to exchange a symmetric key securely without requiring a previous relationship or prior arrangements. The algorithm allows for key distribution, but does not provide encryption or digital signature functionality. The algorithm is based on the difficulty of calculating discrete logarithms in a finite field. The original Diffie-Hellman algorithm is vulnerable to a man-in-the-middle attack, because no authentication occurs before public keys are exchanged.
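To see the mechanics, here is a toy sketch with deliberately tiny, insecure numbers (a real exchange uses primes that are thousands of bits long); all values below are purely illustrative.

using System;
using System.Numerics;

class ToyDiffieHellman
{
    static void Main()
    {
        // Public parameters agreed on in the clear (toy values; real ones are huge).
        BigInteger p = 23; // prime modulus
        BigInteger g = 5;  // generator

        // Each side picks a private exponent it never transmits.
        BigInteger alicePrivate = 6;
        BigInteger bobPrivate = 15;

        // Each side sends only g^private mod p over the wire.
        BigInteger alicePublic = BigInteger.ModPow(g, alicePrivate, p);
        BigInteger bobPublic = BigInteger.ModPow(g, bobPrivate, p);

        // Both sides derive the same shared secret without ever sending it.
        BigInteger aliceShared = BigInteger.ModPow(bobPublic, alicePrivate, p);
        BigInteger bobShared = BigInteger.ModPow(alicePublic, bobPrivate, p);

        Console.WriteLine($"Alice's shared secret: {aliceShared}");
        Console.WriteLine($"Bob's shared secret:   {bobShared}"); // identical values
    }
}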

Is D-H always used in HTTPS?

The answer is NO. In practice, Diffie-Hellman is not always used, with RSA being the dominant public-key algorithm for the key exchange.

The first element of the CipherSuite parameter (see the Handshake Protocol in Action figure above) is the key exchange method. The following key exchange methods are supported on HTTP Secure:

RSA: The secret key is encrypted with the receiver’s RSA public key. A public-key certificate for the receiver’s key must be made available.
Fixed Diffie-Hellman: This is a Diffie-Hellman key exchange in which the server’s certificate contains the Diffie-Hellman public parameters signed by the certificate authority (CA). That is, the public-key certificate contains the Diffie-Hellman public-key parameters. The client provides its Diffie-Hellman public-key parameters either in a certificate, if client authentication is required, or in a key exchange message. This method results in a fixed secret key between two peers, based on the Diffie-Hellman calculation using the fixed public keys.
Ephemeral Diffie-Hellman: This technique is used to create ephemeral (temporary, one-time) secret keys. In this case, the Diffie-Hellman public keys are exchanged, and signed using the sender’s private RSA or DSS key. The receiver can use the corresponding public key to verify the signature. Certificates are used to authenticate the public keys. This option appears to be the most secure of the three Diffie-Hellman options because it results in a temporary, authenticated key.
Anonymous Diffie-Hellman: The base Diffie-Hellman algorithm is used, with no authentication. That is, each side sends its public Diffie-Hellman parameters to the other, with no authentication. This approach is vulnerable to man-in-the-middle attacks, in which the attacker conducts anonymous Diffie-Hellman exchanges with both parties.
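If you want to check which method a particular server negotiates with your client, one option (a sketch assuming .NET, with an illustrative host name) is to inspect an SslStream after the handshake completes:

using System;
using System.Net.Security;
using System.Net.Sockets;

class HandshakeInspector
{
    static void Main()
    {
        string host = "www.example.com"; // illustrative host

        using (var tcp = new TcpClient(host, 443))
        using (var ssl = new SslStream(tcp.GetStream()))
        {
            // Performs the handshake: hello messages, certificate, key exchange.
            ssl.AuthenticateAsClient(host);

            Console.WriteLine($"Protocol       : {ssl.SslProtocol}");
            Console.WriteLine($"Cipher         : {ssl.CipherAlgorithm}");
            // A Diffie-Hellman value here indicates a D-H style exchange; RsaKeyX means
            // the session key was encrypted with the server's RSA public key.
            Console.WriteLine($"Key exchange   : {ssl.KeyExchangeAlgorithm}");
            Console.WriteLine($"Hash algorithm : {ssl.HashAlgorithm}");
        }
    }
}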

References:

Harris, Shon. CISSP All-in-One Exam Guide, Fifth Edition (Kindle Locations 16125-16152). McGraw-Hill.

SSL: Foundation for Web Security – The Internet Protocol Journal – Volume 1, No. 1

Gibson, Darril. CompTIA Security+: Get Certified Get Ahead: SY0-301 Study Guide 

The Secure Sockets Layer Protocol

HTTP over TLS

Diffie-Hellman Key Agreement Method.

Role Based Access Control in ASP.NET MVC

Role Based Access Control (RBAC) in ASP.NET MVC is pretty straightforward. There is also a way to do claims-based access control, but the most common approach is authorizing users based on the roles they have in an organization.

This blog post only covers RBAC using the ASP.NET Model-View-Controller (MVC) framework for web applications.

As a developer, to show or hide action links in a View depending on the user’s role, you can use the following Razor syntax:

@if (User.IsInRole("Administrator"))
{
...
}

On the Controller class, to prevent access to an action when the user types the URL directly into the browser, we can annotate the controller or action with role-check attributes.

For example, the following code limits access to any actions on the AdministrationController to users who are members of the Administrator role.

[Authorize(Roles = "Administrator")]
public class AdministrationController : Controller
{
}

You can specify multiple roles as a comma-separated list:

[Authorize(Roles = "HRManager,Finance")]
public class SalaryController : Controller
{
}

The SalaryController class above will only be accessible by users who are members of the HRManager role or the Finance role.

If you apply multiple role attributes, then the requesting user must be a member of all the roles specified to access the controller’s methods. The following sample requires that a user be a member of both the PowerUser and ControlPanelUser roles before authorization is granted.

[Authorize(Roles = "PowerUser")]
[Authorize(Roles = "ControlPanelUser")]
public class ControlPanelController : Controller
{
}

You can further limit access by applying additional role authorization attributes at the action level:

[Authorize(Roles = "Administrator, PowerUser")]
public class ControlPanelController : Controller
{
    public ActionResult SetTime()
    {
    }
 
    [Authorize(Roles = "Administrator")]
    public ActionResult ShutDown()
    {
    }
}

In the previous code snippet, members of the Administrator role or the PowerUser role can access the controller and the SetTime action, but only members of the Administrator role can access the ShutDown action.

You can also lock down a controller but allow anonymous, unauthenticated access to individual actions.

[Authorize]
public class ControlPanelController : Controller
{
    public ActionResult SetTime()
    {
    }
 
    [AllowAnonymous]
    public ActionResult Login()
    {
    }
}

There is also a way to use Policies for limiting access, but to keep it simple, since we already have the roles defined, we can use plain RBAC for now, until we need something more complex.

When the user requests the URL directly, they will get a nasty 401 Unauthorized page from IIS if their request is not authorized.

The Razor code shown in the first snippet can be used in the View to show elements only when the user is in the Administrator role, but if the requesting user is not in the required role and requests a protected URL directly, he or she will receive a 401 Unauthorized HTTP response.

We can give them a more friendly page explaining they don’t have permissions to access the resources requested and link them to a request access page.

This is what could be done:

For a 401 you will probably see the standard 401 Unauthorized page, even if you have added 401 to the customErrors section in your web.config. When using IIS and Windows Integrated Authentication, the check happens before ASP.NET MVC even sees the request.

By editing the Global.asax file you can redirect to a route created for 401 Unauthorized responses, sending the user to an “Unauthorized to see this” View (a friendly page). The use case for this scenario is when someone receives a link to a View that requires authorization but has not completed other steps in the process, such as paperwork needed prior to accessing the secure resource.

In the Global.asax:

void Application_EndRequest(object sender, System.EventArgs e)
{
    // If the user is not authorized to see this page or access this function, send them to the error page.
    if (Response.StatusCode == 401)
    {
        Response.ClearContent();
        Response.RedirectToRoute("ErrorHandler", (RouteTable.Routes["ErrorHandler"] as Route).Defaults);
    }
}

and in RouteConfig.cs:

routes.MapRoute(
    "ErrorHandler",
    "Error/{action}/{errMsg}",
    new { controller = "Error", action = "Unauthorized", errMsg = UrlParameter.Optional }
);

and in the ErrorController class:

public ViewResult Unauthorized()
{
    // Do not set Response.StatusCode = 401 here, or else you get a redirect loop.
    // The View returned is the friendly .cshtml page.
    return View();
}

 

Voila!

Happy coding.

Forms authentication and client caching in ASP.NET 1.1

We got really sad news today, the type of news that gives you a sustained stomachache for a few days.
I’m not going to blog about the way my stomach feels, but to remind myself that I like what I do and that this is not just a job, it’s a pleasure.

The Problem:

The problem appeared on one of our web projects that uses Forms authentication. After the authentication process we create an encrypted ticket, create the cookie that will be used by the FormsAuthentication provider, and redirect to the requested page:

' Create the Forms authentication ticket (valid for 60 minutes).
Dim authTix As New FormsAuthenticationTicket(1, UserName, DateTime.Now, _
    DateTime.Now.AddMinutes(60), isCookiePersistent, UserData)

' Encrypt the ticket and store it in the Forms authentication cookie.
Dim encryptedTix As String = FormsAuthentication.Encrypt(authTix)
Dim authCookie As New HttpCookie(FormsAuthentication.FormsCookieName, encryptedTix)
authCookie.Expires = authTix.Expiration

Context.Response.Cookies.Add(authCookie)
FormsAuthentication.RedirectFromLoginPage(UserName, False)

During user log-out we clear the session, call FormsAuthentication.SignOut(), and redirect the user to the login page.

We had, however, odd behavior. After the user had logged out of the application, he could, by clicking the back button in the same browser window, navigate to the previous pages he had opened, pages in the secure area. These requests did not hit the server, so I presumed the user was seeing cached pages in the browser.

The Solution:

Use the directive Context.Response.Cache.SetNoStore() on all the secure pages that shouldn’t be cached.

For more info on the framework class HttpCacheability, go to MSDN.

For more info on Cache-Control headers in HTTP 1.1, go here.
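As a sketch (shown in C# for consistency with the other posts on this blog, although the project above is VB.NET, and with a hypothetical base-class name), the directive can be applied once in a base page that every page in the secure area inherits from:

using System;
using System.Web.UI;

// Hypothetical base class for the pages in the secure area. Every page that
// inherits from it sends a no-store cache directive, so the browser will not
// replay the page from its cache after the user signs out.
public class SecurePage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        Response.Cache.SetNoStore();
    }
}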

The Side Effect Problem:

It seems IE does not store the file in its Temporary Internet Files folder whenever the server specifies the “no-store” HTTP cache directive; as a consequence, it cannot feed Acrobat, Excel, or whatever external application with the output of your page. If the application generates Excel or PDF reports on the fly, they will produce an error if this directive is sent in the response.

http://support.microsoft.com/kb/243717/en-us

The Second Solution:
Remove this cache header whenever you need to send the client a file to be opened from the browser (for example, a generated Excel or PDF report).
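A possible sketch for those report pages, again in C# and with hypothetical names: instead of no-store, mark the response as privately cacheable so IE keeps a local copy it can hand off to Excel or Acrobat.

using System;
using System.Web;
using System.Web.UI;

// Hypothetical page that streams a generated PDF report to the browser.
public class ReportDownloadPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Allow the browser to keep a private copy so the external viewer can
        // open the file, while still preventing shared/proxy caching.
        Response.Cache.SetCacheability(HttpCacheability.Private);

        Response.ContentType = "application/pdf";
        // ... write the report bytes to Response.OutputStream here ...
    }
}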