5/06/2020

Hacking Everything With RF And Software Defined Radio - Part 3


Reversing Device Signals with RFCrack for Red Teaming


This blog was researched and automated by:
@Ficti0n 
@GarrGhar 
Mostly because someone didn't want to pay for a new clicker that was lost LOL

Websites:
Console Cowboys: http://consolecowboys.com 
CC Labs: http://cclabs.io

CC Labs Github for RFCrack Code:
https://github.com/cclabsInc/RFCrack


Contrived Scenario: 

Bob was tasked to break into XYZ Corporation, so he pulled up the facility on Google Maps to see the layout and look for possible entry paths into the company headquarters. The maps showed that the whole facility was surrounded by a security access gate. Not much else could be determined remotely, so Bob decided to take a drive to the facility and get a closer look. 

Bob parked down the street in view of the entry gate. Upon arrival he noted the gate was unmanned and cars were either rolling up and typing in an access code or simply driving up as the gate opened automatically. Interestingly, some kind of wireless technology was in use. 

How do we go from watching a car go through a gate, to having a physical device that opens the gate?  

We will take a look at reversing a signal from an actual gate in order to program a remote with the proper RF signal. We will learn how to perform these steps manually to better understand how RF remotes work, while also automating parts of the process with RFCrack. 

Items used in this blog: 

Garage Remote Clicker: https://goo.gl/7fDQ2N
YardStick One: https://goo.gl/wd88sr
RTL SDR: https://goo.gl/B5uUAR


 







Walkthrough Video: 




Remotely sniffing signals for later analysis: 

In the previous blogs, we sniffed signals and replayed them to perform actions. In this blog we are going to take a look at a signal and reverse it to create a physical device that acts as a replacement for the original device. Depending on the scenario, this may be a better approach if you plan to enter the facility off hours when there is no signal to capture, or if you don't want to look suspicious. 

Recon:

Let's first use the scanning functionality in RFCrack to find known frequencies. We need to understand the frequencies that gates usually use so we can set our scanner to rotate through a limited number of them. The smaller the range of frequencies used, the better the chance of capturing a signal when a car opens the target gate. This is especially beneficial if the scanning device is left unattended as a dropbox, for example Kali on a Raspberry Pi, which could be accessed from a good distance away via a wifi hotspot or cellular connection.

Based on research, remotes tend to use 315MHz, 390MHz, 433MHz and a few other frequencies, so in our case we will start up RFCrack on those likely frequencies and just let it run. We can also look up the FCC ID of our clicker to see which frequencies manufacturers are using. Although not standardized, similar technologies tend to use similar configurations. Below is an excerpt from the data sheet located at https://fccid.io/HBW7922/Test-Report/test-report-1755584, which indicates that if this gate is compatible with a universal remote it should be using the 300, 310, 315, 372 or 390MHz frequencies. Most notable are 310, 315 and 390, as the others only appear in a couple of configurations. 




RFCrack Scanning: 

Since the most used frequencies for our universal clicker are 310, 315 and 390MHz, let's set the RFCrack scanner to rotate through those and scan for signals. If a number of cars go through the gate and there are no captures, we can adjust the scanner later over our wifi connection from a distance. 

Destroy:RFCrack ficti0n$ python RFCrack.py -k -f 310000000 315000000 390000000
Currently Scanning: 310000000 To cancel hit enter and wait a few seconds

Currently Scanning: 315000000 To cancel hit enter and wait a few seconds

Currently Scanning: 390000000 To cancel hit enter and wait a few seconds

e0000000000104007ffe0000003000001f0fffe0fffc01ff803ff007fe0fffc1fff83fff07ffe0007c00000000000000000000000000000000000000000000e0007f037fe007fc00ff801ff07ffe0fffe1fffc3fff0001f00000000000000000000000000000000000000000000003809f641fff801ff003fe00ffc1fff83fff07ffe0fffc000f80000000000000000000000000000000000000000000003c0bff01bdf003fe007fc00ff83fff07ffe0fffc1fff8001f0000000000000000000000000000000000000000000000380000000000000000002007ac115001fff07ffe0fffc000f8000000000000000000000000000000000000000
Currently Scanning: 433000000 To cancel hit enter and wait a few seconds


Example of logging output: 

From the above output you will see that a signal was found on 390MHz. However, if you had left this running for a few hours you could easily review all of the output in the log file located in your RFCrack/scanning_logs directory. For example, the following captures were found in the log file in an easily parseable format: 

Destroy:RFCrack ficti0n$ cd scanning_logs/
Destroy:scanning_logs ficti0n$ ls
Dec25_14:58:45.log Dec25_21:17:14.log Jan03_20:12:56.log
Destroy:scanning_logs ficti0n$ cat Dec25_21\:17\:14.log
A signal was found on :390000000
c0000000000000000000000000008000000000000ef801fe003fe0fffc1fff83fff07ffe0007c0000000000000000000000000000000000000000000001c0000000000000000050003fe0fbfc1fffc3fff83fff0003e00000000000000000000000000000000000000000000007c1fff83fff003fe007fc00ff83fff07ffe0fffc1fff8001f00000000000000000000000000000000000000000000003e0fffc1fff801ff003fe007fc1fff83fff07ffe0fffc000f80000000000000000000000000000000000000000000001f07ffe0dffc00ff803ff007fe0fffc1fff83fff07ffe0007c000000000000000000000000000000000000000000
A signal was found on :390000000
e0000000000104007ffe0000003000001f0fffe0fffc01ff803ff007fe0fffc1fff83fff07ffe0007c00000000000000000000000000000000000000000000e0007f037fe007fc00ff801ff07ffe0fffe1fffc3fff0001f00000000000000000000000000000000000000000000003809f641fff801ff003fe00ffc1fff83fff07ffe0fffc000f80000000000000000000000000000000000000000000003c0bff01bdf003fe007fc00ff83fff07ffe0fffc1fff8001f0000000000000000000000000000000000000000000000380000000000000000002007ac115001fff07ffe0fffc000f8000000000000000000000000000000000000000
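Because each log entry is just an "A signal was found on :<frequency>" line followed by the raw hex payload on the next line, pulling the captures back out of a log only takes a few lines of Python. This is a minimal sketch assuming that exact two-line format; the filename is one of the examples above:

import re

captures = []
with open("scanning_logs/Dec25_21:17:14.log") as log:
    lines = [line.strip() for line in log if line.strip()]

# Entries alternate: a frequency line, then the raw hex payload captured on it.
for freq_line, payload in zip(lines[::2], lines[1::2]):
    match = re.search(r"A signal was found on :(\d+)", freq_line)
    if match:
        captures.append((int(match.group(1)), payload))

for freq, payload in captures:
    print(freq, payload[:32] + "...")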



Analyzing the signal to determine toggle switches: 

Ok sweet, now we have a valid signal which will open the gate. Of course we could just replay this and open the gate, but we are going to create a physical device we can pass along to whoever needs entry, regardless of whether they understand RF. No need to fumble around with a computer and look suspicious. Also, replaying a signal with RFCrack is just too easy; there's nothing new to learn taking the easy route. 

The first thing we are going to do is graph the capture and take a look at the wave pattern it creates. This can give us a lot of clues that will prove beneficial in figuring out the toggle switch pattern used by the remote. There are a few ways we can do this. If you don't have a YardStick at home, you can capture the initial signal with your cheap RTL-SDR dongle as we did in the first RF blog and then open it in Audacity. This signal is shown below. 



Let RFCrack Plot the Signal For you: 

The other option is to let RFCrack help you out: take a signal from the log output above and let RFCrack plot it for you. This saves time and allows you to use only one piece of hardware for all of the work. This can easily be done with the following command: 

Destroy:RFCrack ficti0n$ python RFCrack.py -n -g -u 1f0fffe0fffc01ff803ff007fe0fffc1fff83fff07ffe0007c
-n = No yardstick attached
-g = graph a single signal
-u = Use this piece of data
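If you want to eyeball the same data outside of RFCrack, the hex payload can be expanded into bits and drawn as a square wave in a few lines of Python. This is only a rough stand-in for RFCrack's graphing (it assumes matplotlib is installed), not its actual implementation:

import matplotlib.pyplot as plt

capture = "1f0fffe0fffc01ff803ff007fe0fffc1fff83fff07ffe0007c"

# Expand each hex nibble into 4 bits to recover the on/off keying pattern.
bits = []
for nibble in capture:
    bits.extend(int(b) for b in format(int(nibble, 16), "04b"))

plt.step(range(len(bits)), bits, where="post")
plt.ylim(-0.2, 1.2)
plt.xlabel("Sample")
plt.ylabel("Level")
plt.title("Captured gate signal")
plt.savefig("capture_plot.png")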




From the graph output we see 2 distinct crest lengths and some junk at either end that we can throw away. These 2 unique crest lengths correspond to our toggle switch positions of up/down, giving us the following 2 possible scenarios for a 9-toggle-switch remote based on the 9 crests above: 

Possible toggle switch scenarios:

  1. down down up up up down down down down
  2. up up down down down up up up up 
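If you want to sanity-check those guesses programmatically, measure the length of each run of high samples in the capture, throw away the partial crests at either end, and split the rest into long and short pulses. The sketch below assumes a long crest maps to "down" (the inverse assumption gives scenario 2); with this capture it should print the first scenario:

import itertools

capture = "1f0fffe0fffc01ff803ff007fe0fffc1fff83fff07ffe0007c"
bits = "".join(format(int(n, 16), "04b") for n in capture)

# Length of every run of consecutive 1s -- each run is one crest in the plot.
runs = [len(list(group)) for bit, group in itertools.groupby(bits) if bit == "1"]

# Drop the partial crests at either end (the junk mentioned above).
crests = runs[1:-1]

# Classify each crest as long or short around the midpoint of the observed lengths.
threshold = (min(crests) + max(crests)) / 2
print(crests)
print(" ".join("down" if crest > threshold else "up" for crest in crests))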

Configuring a remote: 

The proper toggle switch configuration allows us to program a universal remote that sends a signal to the gate. However, even with the proper toggle switch configuration, the remote can send many different signals depending on the manufacturer or type of signal. In order to figure out which configuration the gate is using without physically watching the gate open, we will rely on local signal analysis/comparison.  

Programming a remote is normally done by clicking the device with the proper toggle switch configuration until the gate opens and the correct manufacturer is configured. Since we don't have access to the gate after capturing the initial signal, we will instead compare each signal from the remote to the original captured signal. 


Comparing Signals: 

This can be done a few ways. One way is to use an RTL-SDR to capture all of the presses and then visually compare the output in Audacity. Instead, I prefer to use one tool and automate this process with RFCrack, so that on each click of the device we can compare the signal with the original capture. Since multiple signals are sent with each click, RFCrack will analyze all of them, provide a percent likelihood of match for every signal in that click, and then graph the highest % match for visual confirmation. If you are seeing an 80-90% match, you should have the correct signal match.  
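As a rough mental model of what that comparison does, think of it as a similarity score between the original capture's bit pattern and each newly sniffed payload. The snippet below only illustrates the idea with difflib and is not RFCrack's actual matching algorithm; the candidate payload is a slice of the first capture logged earlier:

from difflib import SequenceMatcher

def to_bits(hexstr):
    return "".join(format(int(nibble, 16), "04b") for nibble in hexstr)

original  = "1f0fffe0fffc01ff803ff007fe0fffc1fff83fff07ffe0007c"
candidate = "1f07ffe0dffc00ff803ff007fe0fffc1fff83fff07ffe0007c"

# Ratio of matching bit runs between the two payloads (1.0 = identical).
score = SequenceMatcher(None, to_bits(original), to_bits(candidate)).ratio()
print("Percent Chance of Match for press is: %.2f" % score)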

Note: Not every click will show output, as some clicks will be on different frequencies. These don't matter, since our recon confirmed the gate is communicating on 390MHz. 

In order to analyze the signals in real time, you will need to open up your clicker and set the proper toggle switch settings, then set up a sniffer and live analysis with RFCrack: 

Open up 2 terminals and use the following commands: 

#Setup a sniffer on 390MHz
  Setup sniffer:      python RFCrack.py -k -c -f 390000000
#Monitor the log file, and provide the gate's original signal
  Setup Analysis:     python RFCrack.py -c -u 1f0fffe0fffc01ff803ff007fe0fffc1fff83fff07ffe0007c -n

Cmd switches used
-k = known frequency
-c = compare mode
-f = frequency
-n = no yardstick needed for analysis

Make sure your remote is configured for one of the possible toggle configurations determined above. In the below example I am using the first configuration, with any extra toggles left in the down position: (down down up up up down down down down)




Analyze Your Clicks: 

Now, with the two terminals open and running, click the reset switch at the bottom left and hold until it flashes. Then keep clicking the left button and viewing the output in the analysis terminal, which will provide the comparisons as graphs are loaded to validate the output. If you click the device and no output is seen, it simply means the device is communicating on a frequency we are not listening on. We don't care about those signals since they don't pertain to our target. 

At around the 11th click you will see a high likelihood of a match and a graph which is near identical. A few click outputs are shown below, with the graph from the last output showing a 97% match. It will always graph the highest percentage within a click. Sometimes there will be blank graphs when the data is wacky and doesn't work so well; this is fine since we don't care about wacky data. 

You will notice the previous clicks did not show anything close to a match, so it's pretty easy to determine the right manufacturer and setup for your target gate. Now just click the right-hand button on the remote and it should be configured with the gate's setup, even though you are in another location setting up for your test. 

For Visual of the last signal comparison go to ./imageOutput/LiveComparison.png
----------Start Signals In Press--------------
Percent Chance of Match for press is: 0.05
Percent Chance of Match for press is: 0.14
Percent Chance of Match for press is: 0.14
Percent Chance of Match for press is: 0.12
----------End Signals In Press------------
For Visual of the last signal comparison go to ./imageOutput/LiveComparison.png
----------Start Signals In Press--------------
Percent Chance of Match for press is: 0.14
Percent Chance of Match for press is: 0.20
Percent Chance of Match for press is: 0.19
Percent Chance of Match for press is: 0.25
----------End Signals In Press------------
For Visual of the last signal comparison go to ./imageOutput/LiveComparison.png
----------Start Signals In Press--------------
Percent Chance of Match for press is: 0.93
Percent Chance of Match for press is: 0.93
Percent Chance of Match for press is: 0.97
Percent Chance of Match for press is: 0.90
Percent Chance of Match for press is: 0.88
Percent Chance of Match for press is: 0.44
----------End Signals In Press------------
For Visual of the last signal comparison go to ./imageOutput/LiveComparison.png


Graph Comparison Output for 97% Match: 







Conclusion: 


You have now walked through successfully reversing a toggle switch remote for a security gate. You took a raw signal and created a working device using only a YardStick and RFCrack. This was just a quick tutorial on leveraging the skillsets you gained in previous blogs in order to learn how to analyze RF signals within embedded devices. There are many scenarios these same techniques could assist in. We also covered a few new features in RFCrack regarding logging, graphing and comparing signals. These are just a few of the features which have been added since the initial release; for more info and other features check the wiki. 


CVE-2020-2655 JSSE Client Authentication Bypass

TLDR: If you are using TLS ClientAuthentication in Java 11 or newer you should patch NOW. There is a trivial bypass.


During our joint research on DTLS state machines, we discovered a really interesting vulnerability (CVE-2020-2655) in recent versions of Sun JSSE (Java 11, 13). Interestingly, the vulnerability does not only affect the DTLS implementation but also affects the TLS implementation of JSSE in a similar way. The vulnerability allows an attacker to completely bypass client authentication and to authenticate as any user for which it knows the certificate WITHOUT needing to know the private key. If you just want the PoCs, feel free to skip the intro.





DTLS

I guess most readers are very familiar with the traditional TLS handshake which is used in HTTPS on the web.


DTLS is the crayon-eating brother of TLS. It was designed to be very similar to TLS, but to provide the necessary changes to run TLS over UDP. DTLS currently exists in 2 versions (DTLS 1.0 and DTLS 1.2), where DTLS 1.0 roughly equals TLS 1.1 and DTLS 1.2 roughly equals TLS 1.2. DTLS 1.3 is currently in the process of being standardized. But what exactly are the differences? If a protocol uses UDP instead of TCP, it can never be sure that all messages it sent were actually received by the other party or that they arrived in the correct order. If we just ran vanilla TLS over UDP, an out-of-order or dropped message would break the connection (not only during the handshake). DTLS therefore includes additional sequence numbers that allow for the detection of out-of-order handshake messages or dropped packets. The sequence number is transmitted within the record header and is increased by one for each record transmitted. This is different from TLS, where the record sequence number is implicit and not transmitted with each record. The record sequence numbers are especially relevant once records are transmitted encrypted, as they are included in the additional authenticated data or HMAC computation. This allows a receiving party to verify AEAD tags and HMACs even if a packet was dropped on the transport and the counters are "out of sync".
Besides the record sequence numbers, DTLS has additional header fields in each handshake message to ensure that all the handshake messages have been received. The first handshake message a party sends has message_seq=0, the next handshake message it transmits gets message_seq=1, and so on. This allows a party to check if it has received all previous handshake messages. If, for example, a server received message_seq=2 and message_seq=4 but did not receive message_seq=3, it knows that it does not have all the required messages and is not allowed to proceed with the handshake. After a reasonable amount of time, it should instead periodically retransmit its previous flight of handshake messages, to indicate to the opposing party that it is still waiting for further handshake messages.

This process gets even more complicated by the additional fragmentation fields DTLS includes. The MTU (Maximum Transmission Unit) plays a crucial role in UDP, as when you send a UDP packet bigger than the MTU, the IP layer might have to fragment the packet into multiple packets, which will result in failed transmissions if parts of the fragment get lost in transport. It is therefore desirable to have smaller packets in a UDP-based protocol. Since TLS records can get quite big (especially the Certificate message, as it may contain a whole certificate chain), the messages have to support fragmentation. One would assume that the record layer would be ideal for this scenario, as one could detect missing fragments by their record sequence number. The problem is that the protocol wants to support completely optional records, which do not need to be retransmitted if they are lost. These may, for example, be warning alerts or application data records. Also, if one party decides to retransmit a message, it is always retransmitted with an increased record sequence number. For example, the first ClientKeyExchange message might have record sequence 2, the message gets dropped, the client decides that it is time to try again and might send it with record sequence 5. This was done because retransmissions are only part of DTLS within the handshake; after the handshake, it is up to the application to deal with dropped or reordered packets. It is therefore not possible to tell just from the record sequence number whether handshake fragments have been lost. DTLS therefore adds additional fragment information to each handshake message record, which describes where the contained bytes belong within the handshake message.
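For reference, the message_seq and fragment fields described above live in a fixed 12-byte header that precedes every DTLS handshake message (RFC 6347, section 4.2.2). A small Python sketch of that layout, purely as an illustration of the wire format:

import struct

def dtls_handshake_header(msg_type, length, message_seq, fragment_offset, fragment_length):
    # uint24 helper: big-endian 3-byte integer
    def uint24(value):
        return struct.pack("!I", value)[1:]
    return (struct.pack("!B", msg_type)      # msg_type, e.g. 1 = ClientHello
            + uint24(length)                  # total length of the handshake message
            + struct.pack("!H", message_seq)  # message_seq used for reordering
            + uint24(fragment_offset)         # where this fragment starts
            + uint24(fragment_length))        # how many bytes this fragment carries

# A 120-byte ClientHello sent unfragmented as the first handshake message.
print(dtls_handshake_header(1, 120, 0, 0, 120).hex())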


If a party has to replay messages, it might also refragment the messages into bits of different (usually smaller) sizes, as dropped packets might indicate that the packets were too big for the MTU. It might therefore happen that you have already received parts of the message, and then get a retransmission which contains some of the parts you already have, while others are completely new to you and you still do not have the complete message. The only option you then have is to retransmit your whole previous flight to indicate that you still have missing fragments. One notable special case in this retransmission fragmentation madness is the ChangeCipherSpec message. In TLS, the ChangeCipherSpec message is not a handshake message, but a message of the ChangeCipherSpec protocol. It therefore does not have a message_sequence; only the record it is transmitted in has a record sequence number. This is important for applications that have to determine where to insert a ChangeCipherSpec message in the transcript.

As you can see, this whole record sequence, message sequence, second layer of fragmentation, retransmission stuff (I didn't even mention epoch numbers) within DTLS complicates the protocol a lot. Imagine being a developer having to implement this correctly and securely... This might also be a reason why the scientific research community often does not treat DTLS with the same scrutiny as it does TLS. It gets really annoying really fast...

Client Authentication

In most deployments of TLS only the server authenticates itself. It usually does this by sending an X.509 certificate to the client and then proving that it is in fact in possession of the private key for the certificate. In the case of RSA, this is done implicitly by the ability to compute the shared secret (premaster secret); in the case of (EC)DHE, this is done by signing the ephemeral public key of the server. The X.509 certificate is transmitted in plaintext and is not confidential. The client usually does not authenticate itself within the TLS handshake, but rather authenticates in the application layer (for example by transmitting a username and password in HTTP). However, TLS also offers the possibility for client authentication during the TLS handshake. In this case, the server sends a CertificateRequest message during its first flight. The client is then supposed to present its X.509 Certificate, followed by its ClientKeyExchange message (containing either the encrypted premaster secret or its ephemeral public key). After that, the client also has to prove to the server that it is in possession of the private key of the transmitted certificate, as the certificate is not confidential and could be copied by a malicious actor. The client does this by sending a CertificateVerify message, which contains a signature over the handshake transcript up to this point, signed with the private key which belongs to the certificate of the client. The handshake then proceeds as usual with a ChangeCipherSpec message (which tells the other party that upcoming messages will be encrypted under the negotiated keys), followed by a Finished message, which assures that the handshake has not been tampered with. The server also sends a CCS and Finished message, and after that the handshake is completed and both parties can exchange application data. The same mechanism is also present in DTLS.

But what should a Client do if it does not possess a certificate? According to the RFC, the client is then supposed to send an empty certificate and skip the CertificateVerify message (as it has no key to sign anything with). It is then up to the TLS server to decide what to do with the client. Some TLS servers provide different options in regards to client authentication and differentiate between REQUIRED and WANTED (and NONE). If the server is set to REQUIRED, it will not finish the TLS handshake without client authentication. In the case of WANTED, the handshake is completed and the authentication status is then passed to the application. The application then has to decide how to proceed with this. This can be useful to present an error to a client asking him to present a certificate or insert a smart card into a reader (or the like). In the presented bugs we set the mode to REQUIRED.
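For comparison, most TLS stacks expose the same three-way choice. In Python's ssl module, for instance, the equivalent knob is verify_mode; the sketch below only illustrates what a REQUIRED configuration looks like there (file paths are placeholders) and has nothing to do with the vulnerable JSSE code itself:

import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server-cert.pem", "server-key.pem")  # placeholder paths
context.load_verify_locations("client-ca.pem")                # CA that issues client certs

# CERT_NONE ~ NONE, CERT_OPTIONAL ~ WANTED, CERT_REQUIRED ~ REQUIRED:
# with CERT_REQUIRED the handshake fails unless the client presents a valid certificate.
context.verify_mode = ssl.CERT_REQUIRED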

State machines

As you might have noticed, it is not trivial to decide when a client or server is allowed to receive or send each message. Some messages are optional, some are required, some messages are retransmitted, others are not. How an implementation reacts to which message and when is encompassed by its state machine. Some implementations explicitly implement this state machine, while others only do this implicitly by raising errors internally if things happen which should not happen (like setting a master_secret when a master_secret was already set for the epoch). In our research, we looked at exactly these state machines of DTLS implementations using a grey-box approach. The details of our approach will be in our upcoming paper (which will probably have another blog post), but what we basically did was carefully craft message flows and observe the behavior of the implementation in order to construct a Mealy machine which models the implementation's behavior for in-order and out-of-order messages. We then analyzed these Mealy machines for unexpected/unwanted/missing edges. The whole process is very similar to the work of Joeri de Ruiter and Erik Poll.


JSSE Bugs

The bugs we are presenting today were present in Java 11 and Java 13 (Oracle and OpenJDK). Older versions were, as far as we know, not affected. Cryptography in Java is implemented with so-called SecurityProviders. By default SUN JCE is used to implement cryptography; however, every developer is free to write or add their own security provider and to use it for their cryptographic operations. One common alternative to SUN JCE is BouncyCastle. The whole concept is very similar to OpenSSL's engine concept (if you are familiar with that). Within the JCE exists JSSE - the Java Secure Socket Extension, which is the SSL/TLS part of JCE. The presented attacks were evaluated using SUN JSSE, i.e. the default TLS implementation in Java. JSSE implements TLS and DTLS (added in Java 9). However, DTLS is not trivial to use, as the interface is quite complex and there are not a lot of good examples on how to use it. In the case of DTLS, only the heart of the protocol is implemented; how the data is moved from A to B is left to the developer. We developed a test harness around SSLEngine.java to be able to speak DTLS with Java. The way JSSE implements its state machine is quite interesting, as it is completely different from all other analyzed implementations. JSSE uses a producer/consumer architecture to decide which messages to process. The code is quite complex but worth a look if you are interested in state machines.

So what is the bug we found? The first bug we discovered is that a JSSE DTLS/TLS Server accepts the following message sequence, with client authentication set to required:


JSSE is totally fine with the messages and finishes the handshake although the client does NOT provide a certificate at all (nor a CertificateVerify message). It is even willing to exchange application data with the client. But are we really authenticated with this message flow? Who are we? We did not provide a certificate! The answer is: it depends. Some applications trust that the needClientAuth option of the TLS socket works and that the user is *some* authenticated user; which user exactly does not matter or is decided by other authentication methods. If an application does this - then yes, you are authenticated. We tested this bug with Apache Tomcat and were able to bypass ClientAuthentication if it was activated and configured to use JSSE. However, if the application decides to check the identity of the user after the TLS socket was opened, an exception is thrown:

The reason for this is the following code snippet from within JSSE:


As we did not send a client certificate, the value of peerCerts is null and therefore an exception is thrown. Although this bug is already pretty bad, we found an even worse (and weirder) message sequence which completely authenticates a user to a DTLS server (not a TLS server though). Consider the following message sequence:

If we send this message sequence the server magically finishes the handshake with us and we are authenticated.

First off: WTF
Second off: WTF!!!111

This message sequence does not make any sense from a TLS/DTLS perspective. It starts off as a "no-authentication" handshake but then weird things happen. Instead of the Finished message, we send a Certificate message, followed by a Finished message, followed by a second(!) CCS message, followed by another Finished message. Somehow this sequence confuses JSSE such that we are authenticated although we didn't even provide proof that we own the private key for the Certificate we transmitted (as we did not send a CertificateVerify message).
So what is happening here? This bug is basically a combination of multiple bugs within JSSE. By starting the flight with a ClientKeyExchange message instead of a Certificate message, we make JSSE believe that the next messages we are supposed to send are ChangeCipherSpec and Finished (basically the first exploit). Since we did not send a Certificate message we are not required to send a CertificateVerify message. After the ClientKeyExchange message, JSSE is looking for a ChangeCipherSpec message followed by an "encrypted handshake message". JSSE assumes that the first encrypted message it receives will be the Finished message. It, therefore, waits for this condition. By sending ChangeCipherSpec and Certificate we are fulfilling this condition. The Certificate message really is an "encrypted handshake message" :). This triggers JSSE to proceed with the processing of received messages, ChangeCipherSpec message is consumed, and then the Certifi... Nope, JSSE notices that this is not a Finished message, so what JSSE does is buffer this message and revert to the previous state as this step has apparently not worked correctly. It then sees the Finished message - this is ok to receive now as we were *somehow* expecting a Finished message, but JSSE thinks that this Finished is out of place, as it reverted the state already to the previous one. So this message gets also buffered. JSSE is still waiting for a ChangeCipherSpec, "encrypted handshake message" - this is what the second ChangeCipherSpec & Finished is for. These messages trigger JSSE to proceed in the processing. It is actually not important that the last message is a Finished message, any handshake message will do the job. Since JSSE thinks that it got all required messages again it continues to process the received messages, but the Certificate and Finished message we sent previously are still in the buffer. The Certificate message is processed (e.g., the client certificate is written to the SSLContext.java). Then the next message in the buffer is processed, which is a Finished message. JSSE processes the Finished message (as it already had checked that it is fine to receive), it checks that the verify data is correct, and then... it stops processing any further messages. The Finished message basically contains a shortcut. Once it is processed we can stop interpreting other messages in the buffer (like the remaining ChangeCipherSpec & "encrypted handshake message"). JSSE thinks that the handshake has finished and sends ChangeCipherSpec Finished itself and with that the handshake is completed and the connection can be used as normal. If the application using JSSE now decides to check the Certificate in the SSLContext, it will see the certificate we presented (with no possibility to check that we did not present a CertificateVerify). The session is completely valid from JSSE's perspective.

Wow.

The bug was quite complex to analyze and is totally unintuitive. If you are still confused - don't worry. You are in good company, I spent almost a whole day analyzing the details... and I am still confused. The main reason this bug is present is that JSSE did not validate the received message_sequence numbers of incoming handshake messages. It basically called receive, sorted the received messages by their message_sequence, and processed the messages in the "intended" order, without checking that this is the order they were supposed to be sent in.
For example, for JSSE the following message sequence (Certificate and CertificateVerify are exchanged) is totally fine:

Not sending a Certificate message was fine for JSSE as the REQUIRED setting was not correctly evaluated during the handshake. The consumer/producer architecture of JSSE then allowed us to cleverly bypass all the sanity checks.
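To make the missing check concrete, a receiver that actually enforces message_seq ordering would buffer out-of-order handshake messages but only ever process the sequence number it expects next. The following conceptual Python sketch illustrates that validation; it is not JSSE's code:

class HandshakeReceiver:
    def __init__(self):
        self.next_seq = 0
        self.buffer = {}

    def receive(self, message_seq, message):
        # Buffer whatever arrives, but only consume messages in the expected order.
        self.buffer[message_seq] = message
        processed = []
        while self.next_seq in self.buffer:
            processed.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return processed

# A flight that skips message_seq 1 is held back instead of being processed "sorted".
receiver = HandshakeReceiver()
print(receiver.receive(0, "ClientHello"))   # ['ClientHello']
print(receiver.receive(2, "Finished"))      # [] -- still waiting for message_seq 1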
But fortunately (for the community) this bypass does not work for TLS. Only the less-used DTLS is vulnerable. And this also makes some sense: DTLS has to be much more relaxed in dealing with out-of-order messages than TLS, as UDP packets can get swapped or lost in transport and we still want to buffer messages even if they are out of order. But unfortunately for the community, there is also a bypass for JSSE TLS - and it is really, really trivial:

Yep. You can just not send a CertificateVerify (and therefore no signature at all). If there is no signature there is nothing to be validated. From JSSE's perspective, you are completely authenticated. Nothing fancy, no complex message exchanges. Ouch.

PoC

A vulnerable Java server can be found _*here*_. The repository includes a pre-built JSSE server and a Dockerfile to run the server in a vulnerable Java version. (If you want, you can also build the server yourself.)
You can build the Docker image with the following command:

docker build . -t poc

You can start the server with docker:

docker run -p 4433:4433 poc tls

The server is configured to enforce client authentication and to only accept the client certificate with the SHA-256 Fingerprint: B3EAFA469E167DDC7358CA9B54006932E4A5A654699707F68040F529637ADBC2.

You can change the fingerprint the server accepts to that of your own certificate like this:

docker run -p 4433:4433 poc tls f7581c9694dea5cd43d010e1925740c72a422ff0ce92d2433a6b4f667945a746

To exploit the described vulnerabilities, you have to send (D)TLS messages in an unconventional order, or omit specific messages while still computing the correct cryptographic operations. To do this, you could either modify a TLS library of your choice to do the job - or instead use our TLS library TLS-Attacker. TLS-Attacker was built to send arbitrary TLS messages with arbitrary content in an arbitrary order - exactly what we need for this kind of attack. We have already written a few times about TLS-Attacker. You can find a general tutorial __here__, but here is the TLDR (for Ubuntu) to get you going.
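The build itself is a standard Maven build. The steps below are an assumption reconstructed from the project's usual workflow rather than the original post's exact commands (repository path and package names may differ; check the TLS-Attacker README):

sudo apt-get install git openjdk-11-jdk maven
git clone https://github.com/RUB-NDS/TLS-Attacker.git
cd TLS-Attacker
mvn clean install -DskipTests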

Now TLS-Attacker should be built successfully and you should have some built .jar files within the apps/ folder.
We can now create a custom workflow as an XML file where we specify the messages we want to transmit:

This workflow trace basically tells TLS-Attacker to send a default ClientHello, wait for a ServerHelloDone message, then send a ClientKeyExchange message for whichever cipher suite the server chose and then follow it up with a ChangeCipherSpec & Finished message. After that TLS-Attacker will just wait for whatever the server sent. The last action prints the (eventually) transmitted application data into the console. You can execute this WorkflowTrace with the TLS-Client.jar:

java -jar TLS-Client.jar -connect localhost:4433 -workflow_input exploit1.xml

With a vulnerable server the result should look something like this:

and from TLS-Attackers perspective:

As mentioned earlier, if the server tries to access the certificate, it throws an SSLPeerUnverifiedException. However, if the server does not, it is completely fine exchanging application data.
We can now also run the second exploit against the TLS server (not the one against DTLS). For this case I simply also send the certificate of a valid client to the server (without knowing the private key). The modified WorkflowTrace looks like this:

Your output should now look like this:

As you can see, when accessing the certificate, no exception is thrown and everything works as if we had the private key. Yep, it is that simple.
To test the DTLS-specific vulnerability we need a vulnerable DTLS server:

docker run -p 4434:4433/udp poc:latest dtls

A WorkflowTrace which exploits the DTLS specific vulnerability would look like this:

To execute the handshake we now need to tell TLS-Attacker additionally to use UDP instead of TCP and DTLS instead of TLS:

java -jar TLS-Client.jar -connect localhost:4434 -workflow_input exploit2.xml -transport_handler_type UDP -version DTLS12

Resulting in the following handshake:

As you can see, we can exchange ApplicationData as an authenticated user. The server actually sends the ChangeCipherSpec,Finished messages twice - to avoid retransmissions from the client in case his ChangeCipherSpec,Finished is lost in transit (this is done on purpose).


Conclusion

These bugs are quite fatal for client authentication. The vulnerability got a CVSS score of 4.8 as it is apparently "hard to exploit". It's hard to estimate the impact of the vulnerability, as client authentication is often done in internal networks, on unusual ports or in smart-card setups. If you want to know more about how we found these vulnerabilities, you sadly have to wait for our research paper. Until then ~:)

Credits

Paul Fiterau Brostean (@PaulTheGreatest) (Uppsala University)
Robert Merget (@ic0nz1) (Ruhr University Bochum)
Juraj Somorovsky (@jurajsomorovsky) (Ruhr University Bochum)
Kostis Sagonas (Uppsala University)
Bengt Jonsson (Uppsala University)
Joeri de Ruiter (@cypherpunknl)  (SIDN Labs)

 

 Responsible Disclosure

We reported our vulnerabilities to Oracle in September 2019. The patch for these issues was released on 14.01.2020.


CORS Misconfigurations On A Large Scale

Inspired by James Kettle's great OWASP AppSec Europe talk on CORS misconfigurations, we decided to fiddle around with CORS security issues a bit. We were curious how many websites out there are actually vulnerable because of dynamically generated or misconfigured CORS headers.

The issue: CORS misconfiguration

Cross-Origin Resource Sharing (CORS) is a technique to punch holes into the Same-Origin Policy (SOP) – on purpose. It enables web servers to explicitly allow cross-site access to a certain resource by returning an Access-Control-Allow-Origin (ACAO) header. Sometimes, the value is even dynamically generated based on user input such as the Origin header sent by the browser. If misconfigured, an unintended website can access the resource. Furthermore, if the Access-Control-Allow-Credentials (ACAC) server header is set, an attacker can potentially leak sensitive information from a logged-in user – which is almost as bad as XSS on the actual website. Below is a list of CORS misconfigurations which can potentially be exploited. For more technical details on the issues, read this fine blog post.

Misconfiguration: Description
Developer backdoor: Insecure developer/debug origins like JSFiddle or CodePen are allowed to access the resource
Origin reflection: The origin is simply echoed in the ACAO header; any site is allowed to access the resource
Null misconfiguration: Any site is allowed access by forcing the null origin via a sandboxed iframe
Pre-domain wildcard: notdomain.com is allowed access, which can simply be registered by the attacker
Post-domain wildcard: domain.com.evil.com is allowed access, which can simply be set up by the attacker
Subdomains allowed: sub.domain.com is allowed access, exploitable if the attacker finds XSS in any subdomain
Non-SSL sites allowed: An HTTP origin is allowed access to an HTTPS resource, which allows a MitM to break encryption
Invalid CORS header: Wrong use of wildcard or multiple origins; not a security problem but should be fixed
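Conceptually, checking a single URL for the reflection and wildcard cases above boils down to sending a crafted Origin header and inspecting the ACAO/ACAC response headers. The following minimal Python sketch with the requests library shows the idea (the target URL and probe origins are placeholders); CORStest, described next, automates this with more cases and options:

import requests

url = "https://example.com/"  # placeholder target
probes = [
    "https://evil.example",              # origin reflection
    "null",                              # null misconfiguration
    "https://notexample.com",            # pre-domain wildcard
    "https://example.com.evil.example",  # post-domain wildcard
]

for origin in probes:
    response = requests.get(url, headers={"Origin": origin}, timeout=10)
    acao = response.headers.get("Access-Control-Allow-Origin")
    acac = response.headers.get("Access-Control-Allow-Credentials")
    if acao == origin:
        print("Origin %s is allowed (Allow-Credentials: %s)" % (origin, acac))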

The tool: CORStest

Testing for such vulnerabilities can easily be done with curl(1). To support some more options such as parallelization, we wrote CORStest, a simple Python-based CORS misconfiguration checker. It takes as input a text file containing a list of domain names or URLs to check for misconfigurations, and supports some further options:

usage: corstest.py [arguments] infile

positional arguments:
infile File with domain or URL list

optional arguments:
-h, --help show this help message and exit
-c name=value Send cookie with all requests
-p processes multiprocessing (default: 32)
-s always force ssl/tls requests
-q quiet, allow-credentials only
-v produce a more verbose output

CORStest can detect potential vulnerabilities by sending various Origin request headers and checking for the Access-Control-Allow-Origin response. An example for those of the Alexa top 750 websites which allow credentials for CORS requests is given below.

Evaluation with Alexa top 1 Million websites

To evaluate – on a larger scale – how many sites actually have wide-open CORS configurations, we ran CORStest on the Alexa top 1 million sites:

$ git clone https://github.com/RUB-NDS/CORStest.git && cd cors/
$ wget -q http://s3.amazonaws.com/alexa-static/top-1m.csv.zip
$ unzip top-1m.csv.zip
$ awk -F, '{print $2}' top-1m.csv > alexa.txt
$ ./corstest.py alexa.txt

This test took about 14 hours on a decent connection and revealed the following results:

Only 29,514 websites (about 3%) actually supported CORS on their main page (i.e. responded with Access-Control-Allow-Origin). Of course, many sites such as Google only enable CORS headers for certain resources, not directly on their landing page. We could have crawled all websites (including subdomains) and fed the input to CORStest. However, this would have taken a long time, and for statistics our quick & dirty approach should still be fine. Furthermore, it must be noted that the test was only performed with GET requests (without any CORS preflight) to the http:// version of websites (with redirects followed). Note that just because a website, for example, reflects the origin header, it is not necessarily vulnerable. The context matters; such a configuration can be totally fine for public sites or API endpoints intended to be accessible by everyone. It can be disastrous for payment sites or social media platforms. Furthermore, to be actually exploitable, the Access-Control-Allow-Credentials: true (ACAC) header must be set. Therefore we repeated the test, this time limited to sites that return this header (see the CORStest -q flag):

$ ./corstest.py -q alexa.txt

This revealed even worse results - almost half of the websites supporting ACAO and ACAC headers contained a CORS misconfiguration that could be exploited directly by a web attacker (developer backdoor, origin reflection, null misconfig, pre-/post-domain wildcard):

The Impact: SOP/SSL bypass on payment and taxpayer sites

Note that not all tested websites were actually exploitable. Some contained only public data and some others - such as Bitbucket - had CORS enabled for their main page but not for subpages containing user data. Manually testing the sites, we found the following to be vulnerable:
  • A dozen online banking, bitcoin and other payment sites; one of them allowed us to create a test account, so we were able to write proof-of-concept code which could actually have been used to steal money
  • Hundreds of online shops/e-commerce sites and a bunch of hotel/flight booking sites
  • Various social networks and misc sites which allow users to log in and communicate
  • One US state's tax filing website (however, this one was exploitable by a MitM only)
We informed all sites we manually tested and found to be vulnerable. A simple exploit code example when logged into a website with CORS origin reflection is given below.


The Reason: Copy & Paste and broken frameworks

We were further interested in the reasons for CORS misconfigurations. In particular, we wanted to learn whether there is a correlation between the technology used and misconfiguration. Therefore we used WhatWeb to fingerprint the web technologies of all vulnerable sites. CORS is usually enabled either directly in the HTTP server configuration or by the web application/framework. While we could not identify a single major cause for CORS misconfigurations, we found various potential reasons. A majority of dangerous Access-Control-* headers had probably been introduced by developers; others, however, are based on bugs and bad practices in some products. Insights follow:
  • Various websites return invalid CORS headers; besides wrong use of wildcards such as *.domain.com, ACAO headers which contain multiple origins can often be found; Other examples of invalid - but quite creative - ACAO values we observed are: self, true, false, undefined, None, 0, (null), domain, origin, SAMEORIGIN
  • Rack::Cors, the de facto standard library to enable CORS for Ruby on Rails, maps origins '' or origins '*' into reflecting arbitrary origins; this is dangerous, because developers would think that '' allows nothing and '*' behaves according to the spec: mostly harmless because it cannot be used to make 'credentialed' requests; this config error leads to origin reflection with ACAC headers on about a hundred of the tested and vulnerable websites
  • A majority of websites which allow an HTTP origin to CORS-access an HTTPS resource are run on IIS; this seems to be no bug in IIS itself but rather caused by bad advice found on the Internet
  • nginx is the winner when it comes to serving websites with origin reflections; again, this is not an issue of nginx but of dangerous configs copied from Stackoverflow; the same problem applies to Phusion Passenger
  • The null ACAO value may be based on programming languages that simply return null if no value is given (we haven't found any specific framework though); another explanation is that 'CORS in Action', a popular book on CORS, contains various examples with code such as var originWhitelist = ['null', ...], which could be misinterpreted by developers as safe
  • If CORS is enabled in the crVCL PHP Framework, it adds ACAC and ACAO headers for a configured domain. Unfortunately, it also introduces a post-domain and pre-subdomain wildcard vulnerability: sub.domain.com.evil.com
  • All sites that are based on "Solo Build It!" (scam?) respond with: Access-Control-Allow-Origin: http://sbiapps.sitesell.com
  • Some sites have :// or // as fixed ACAO values. How should browsers deal with this? Inconsistently, at least! Firefox, Chrome, Safari and Opera allow arbitrary origins, while IE and Edge deny all origins.