WPA2 “KRACK” – Technical notes

Tuesday, October 17th, 2017 | Jim Cheetham

KRACK (Key Reinstallation Attacks) is an effective attack on the WPA2 (IEEE 802.11i) protocol used to protect WiFi networks, published on 16 October 2017.

Because it is an attack on the protocol itself, every piece of equipment that can communicate over WiFi is affected. The attack must be carried out by a device that is in range of the network; i.e. this is a local attack, not a remote one.

TL;DR

Be WORRIED, but there is no need to PANIC. If there is a PATCH for your device, apply it as soon as possible. Otherwise, worry until there is.

KRACK tricks your wireless devices into resetting their encryption sessions to a known state, after which the attacker can read everything that they do, and can inject their own data into the network (i.e. a Man-in-the-Middle attack). This effectively turns your “private, secure” WPA2 network into a “public, insecure” one.

If you are safe operating your device on a public insecure network (e.g. airport or coffee-shop WiFi), then you will be equally safe operating it on a compromised WPA2 network.

KRACK does NOT steal your WiFi passwords or credentials.

The only effective fix for KRACK is on your client devices. PCs and laptops are likely to be patched quickly, mobile phones much more slowly if at all, and IoT devices are at serious risk.
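The paper’s title mentions “nonce reuse”, and that is the heart of the attack: WPA2’s ciphers derive a per-packet keystream from the key and a nonce, so forcing a device to reuse a (key, nonce) pair makes it reuse a keystream. Here is a toy sketch of why that is catastrophic – it uses a stand-in hash-based stream cipher, not WPA2’s real CCMP/GCMP ciphers:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash key||nonce||counter (NOT WPA2's real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"the secret WPA2 session key"
nonce = b"\x00" * 8   # KRACK forces the nonce counter back to a known value

p1 = b"GET /login HTTP/1.1"      # plaintext the attacker can guess
p2 = b"user=alice&pw=hunter2"    # plaintext the attacker wants

# Both packets end up encrypted with the SAME (key, nonce) keystream:
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))

# Without ever learning the key, c1 XOR c2 == p1 XOR p2, so one guessed
# plaintext reveals the other message:
n = min(len(c1), len(c2))
recovered = xor(xor(c1[:n], c2[:n]), p1[:n])
print(recovered)  # b'user=alice&pw=hunte' - the first 19 bytes of p2
```

Note what this does and does not demonstrate: the session key itself is never recovered, which is why KRACK does not steal your WiFi password, yet the traffic is exposed anyway.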

KRACK References

  • KRACK website, https://www.krackattacks.com/
  • Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2, https://papers.mathyvanhoef.com/ccs2017.pdf
  • CERT CVEs, http://www.kb.cert.org/vuls/id/228519
    • CVE-2017-13077: reinstallation of the pairwise key in the Four-way handshake
    • CVE-2017-13078: reinstallation of the group key in the Four-way handshake
    • CVE-2017-13079: reinstallation of the integrity group key in the Four-way handshake
    • CVE-2017-13080: reinstallation of the group key in the Group Key handshake
    • CVE-2017-13081: reinstallation of the integrity group key in the Group Key handshake
    • CVE-2017-13082: accepting a retransmitted Fast BSS Transition Reassociation Request and reinstalling the pairwise key while processing it
    • CVE-2017-13084: reinstallation of the STK key in the PeerKey handshake
    • CVE-2017-13086: reinstallation of the Tunneled Direct-Link Setup (TDLS) PeerKey (TPK) key in the TDLS handshake
    • CVE-2017-13087: reinstallation of the group key (GTK) when processing a Wireless Network Management (WNM) Sleep Mode Response frame
    • CVE-2017-13088: reinstallation of the integrity group key (IGTK) when processing a Wireless Network Management (WNM) Sleep Mode Response frame

Timeline

In early 2017 the researchers were finishing another security publication when they realised that part of the OpenBSD WiFi network code they were discussing had a potential problem. By July 2017 the problem had been confirmed on a wide range of systems, and CERT/CC co-ordinated a wider notification to OS and device vendors in late August. The public announcement was made on 16 October 2017.

Many vendors have already made announcements and released patches, and more will follow soon. OpenBSD patched early because of its connection to the original discovery; some other vendors have also issued patches, but many important ones have yet to do so.

Patches

At the moment I’m getting my information from CERT/CC and the Bleeping Computer website, but I’ll verify against original sources as soon as I can: https://www.bleepingcomputer.com/news/security/list-of-firmware-and-driver-updates-for-krack-wpa2-vulnerability/

No Patches

If you have a device using WiFi and there are no patches for it, you should assume that all traffic from that device can be spied on and potentially altered. If you encrypt your communications with TLS/SSL, or something equivalent such as OpenSSH, the main remaining risk is a loss of privacy: an attacker can still see which hosts you talk to, but not what you say. However, you should consider implementing a VPN if you rely on plaintext or easily spoofed protocols.
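If you do fall back on application-layer encryption, make sure it actually verifies certificates. As a minimal Python sketch (the commented-out hostname is a placeholder), the standard library’s default TLS context already enforces the two properties that keep TLS trustworthy on a hostile network:

```python
import ssl

# The stdlib's default TLS context enforces certificate verification
# against the system CA store *and* hostname checking -- both are
# required for TLS to remain useful on a compromised WiFi network.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# Wrapping a connection would look like this (hostname is a placeholder):
# import socket
# with socket.create_connection(("example.org", 443)) as raw:
#     with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
#         print(tls.version())
```

An attacker who can inject traffic into your WiFi can attempt a man-in-the-middle; a context that skips either of those checks would accept their forged certificate silently.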

Further Questions

If you have any further questions, please get in touch with the Information Security Office through the usual channels.


Email “Virus” Outage Incident Report

Tuesday, March 7th, 2017 | Jim Cheetham

Summary

On Thursday 2 March 2017, between approximately midnight and 7am, email to and from the Internet was incorrectly classified as containing a virus, and this caused some messages to be permanently lost. Inbound email was described as having been quarantined, but this was not correct: the original messages had not been preserved.

Between 7am and midday on the 2nd, the email service was effectively shut down for investigation and repair. By midday, all services had been restored. All email sent from 7am onwards would eventually be delivered normally.

Although not yet officially confirmed by the vendor, the cause of the problem was a corrupt or absent antivirus update to the edge email servers.

Timeline

Thursday 2 March 2017

  • Midnight to 1am : Inbound email is increasingly being marked as [PMX:VIRUS] and notification versions of the originals are being delivered to end-users.
  • 2:30am : Outbound email is now being marked as infected, and is rejected (i.e. the senders are notified that their messages are not being sent out).
  • 6:30am : The Information Security Office becomes aware of the issue, and halts all of the inbound and outbound email services in order to investigate.
  • 7:15am : Vendor documentation describes the error that is being seen, but the recommended fix does not work.
  • 8:10am : Our external support partner pro-actively contacts ISO to inform them that there is a current issue affecting multiple customers globally.
  • 8:30am : First ITS Service Notice published – updated with current information at 10:30, 11:30, 12:30 and 3:30
  • 10:00am : Announcement “Email delivery issues” emailed to all-depts@ and CITSP@
  • 10:20am : Outbound email services are restored, but only by disabling the normal antivirus checks. This is not a suitable choice for inbound email, which remains shut down.
  • 11:50am : Vendor supplies a working update to the antivirus; testing confirms that this fixes the problem properly.
  • 12:15pm : Inbound email services restored. All email sent to us since 6:30am will eventually be delivered normally.
  • 3:20pm : Efforts to restore original copies of the incorrectly-marked inbound email are unsuccessful, and are halted. A further announcement “Re: Email delivery issues” is sent to all-depts@

Remediation

We will review the vendor’s incident report when this is published in order to identify any improvements we need in our configuration.

We will investigate the failed quarantine action that caused the mis-categorised email to have been lost.

We will discuss this incident within the context of Disaster Recovery and Business Continuity Plans, to see if any improvements need to be made to these.

Who looks at your data? Evernote, for a start.

Thursday, December 15th, 2016 | Jim Cheetham

Evernote is a great app that helps you create and keep track of notes and synchronise them across the various devices you own – so you can take a photo on your phone, label it, and easily use it in a document on your PC.

It’s also a great example of a “Cloud” service; in order to get that photo from your phone to your PC, it is first copied up to Evernote’s servers, and then your PC copies it down again. You can also access your data directly from a web browser from any computer, if you need it immediately.

However, Evernote does not encrypt the copy of your data that they store on their own servers. They have a privacy policy promising to behave well, of course … but that has just changed.

The latest privacy policy goes into effect in January 2017, and as well as the perfectly necessary exceptions for things like court orders and malware incidents, they have now added a clause that says that employees of Evernote will access your data “to maintain and improve the Service”. That’s a very imprecise and broad statement. How will your data be used to improve their service? What is their service? Is it “anything the company does” or only “synchronising your files”?

Here’s a set of articles and longer discussion of some of the issues around this :-

  • http://arstechnica.com/tech-policy/2016/12/evernotes-new-privacy-policy-raises-eyebrows/
  • http://www.forbes.com/sites/thomasbrewster/2016/12/14/worst-privacy-policy-evernote/
  • https://techcrunch.com/2016/12/14/evernotes-new-privacy-policy-allows-employees-to-read-your-notes/

If you are storing data which you believe to be sensitive in any way, you need to be aware of these policies, and when they change. While Cloud-based services offer many conveniences and a low cost to get started, the long-term costs are sometimes unacceptably high.

Remember, “The Cloud” means nothing more or less than “Someone else’s computers”, and there is often no enforceable contract of any kind.

Update:

The CEO of Evernote is now clarifying that the wording of their Policy was misleading; he states that “Human beings don’t read notes without people’s permission. Full stop.”

So, does that mean that you’re all OK to carry on using Evernote, that you can relax and the emergency is over?

You tell me – it’s your data. If you need to control access to your data, and you’re not able to do this completely because “it’s in the cloud” (where the provider changes their terms, conditions, ownership and even physical location without consulting or informing you), then perhaps you should be doing things differently.

https://www.fastcompany.com/3066680/the-future-of-work/evernote-ceo-explains-why-he-reversed-its-new-privacy-policy-we-screwed-u

Checking SHA256 OpenSSH fingerprints

Wednesday, December 7th, 2016 | Jim Cheetham

Many people using recent versions of ssh are now seeing SHA256 fingerprints by default when connecting to a new server, and are finding it difficult to verify the fingerprint because the server itself doesn’t seem to have the right tool versions to tell you!

For example, here’s the client trying to connect …

$ ssh galathilion
The authenticity of host 'galathilion (10.30.64.220)' can't be established.
RSA key fingerprint is SHA256:8DpA4frlTxKnZ5GJXkORq8QQlLn4eCx4nZf51g55vYc.

The correct thing to do here is to check this fingerprint by connecting to the target server over something that isn’t ssh. Then run the ssh-keygen command to see the fingerprint …

# ssh-keygen -lf /etc/ssh/ssh_host_rsa_key
2048 d3:c6:fa:83:03:f4:ed:44:a4:3e:80:e1:b1:7b:ca:42 /etc/ssh/ssh_host_rsa_key.pub (RSA)

But that’s the wrong format – the MD5 version of the fingerprint, not the SHA256 version. That’s probably because the server version of the openssh tools doesn’t support SHA256 at all. And you can’t work out what the SHA256 fingerprint will be if all you have is the MD5 fingerprint data.

No problem; you can just ask your client ssh to display the server’s fingerprint using the old MD5 presentation :-

$ ssh -o FingerprintHash=md5 galathilion
The authenticity of host 'galathilion (10.30.64.220)' can't be established.
RSA key fingerprint is MD5:d3:c6:fa:83:03:f4:ed:44:a4:3e:80:e1:b1:7b:ca:42.

So that works a treat, and you can validate the connection. Regardless of the scheme used to present the fingerprint to you, it’s the same server public key, so validating the MD5 presentation is the same as validating the SHA256 version.

As an alternative, you can use standard command-line tools to generate the SHA256 fingerprint on the server itself, even though openssh doesn’t do that for you.

# cat /etc/ssh/ssh_host_rsa_key.pub \
  | awk '{print $2}' | base64 -d | sha256sum -b \
  | awk '{print $1}' | xxd -r -p | base64
8DpA4frlTxKnZ5GJXkORq8QQlLn4eCx4nZf51g55vYc=

That mouthful produces the same output as the openssh tool.

Here’s a worked-through example of how this command chain works. I can reproduce the original machine’s data here, because this is a public key. Remember to carefully check what data you are publishing online!

# cat /etc/ssh/ssh_host_rsa_key.pub
 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3R3I0dJxyg61jKuAqY3wJ/gwHzzEVg73sVqqJnzzEGWEkpjEYsIBk1NWh/Ur2q9CnR1KPk8Av22fNgeQay6dm9FcGK7TImiD3ZZfZjfHPzwkcoXyQPuJHW9pT8rlktkihdpTNJqlHHibVUz481AntmptypGqPKdg22EjvjrHk5Q4Op/ahZjgkSoFPphH1gWZcCC2xSPi/mk6nu9DF4Jyr1dJq+hJMPuvQ10ozOpzhemUKD9dGoXIh9g78/+M9Y8/naOW+UxZAy8BGrcpjM27sLHU0K+qxLRFw36Xlgur2+lEiSVt0F2iPpbAiJug3hUQTx2K3gkMG36auVsgrWvK9Q==

This file has a single line, with two (or three) values space-separated. The second field is the Base64 representation of the public key itself, which we’re extracting using awk '{print $2}' (although we could have done this with other commands, such as cut -d' ' -f2). Once we have that field, we convert it from Base64 back into raw binary with base64 -d. Then we pass the binary key through sha256sum, which will produce two fields, a hex-encoded fingerprint and the filename (which is just ‘-’ for standard input), and through awk again to select just the first field. xxd is used to convert the hex-encoded data back to binary again, and finally base64 gives us the same encoding that the openssh tools present.
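For the curious, the same computation can be sketched in a few lines of Python (the function name is mine; the key string is the public key shown above):

```python
import base64
import hashlib

def sha256_fingerprint(pubkey_line: str) -> str:
    """Mirror the shell pipeline: take field 2 (the Base64 key blob),
    decode it, SHA256 the raw bytes, and Base64-encode the digest."""
    blob = base64.b64decode(pubkey_line.split()[1])
    return base64.b64encode(hashlib.sha256(blob).digest()).decode()

key_line = "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3R3I0dJxyg61jKuAqY3wJ/gwHzzEVg73sVqqJnzzEGWEkpjEYsIBk1NWh/Ur2q9CnR1KPk8Av22fNgeQay6dm9FcGK7TImiD3ZZfZjfHPzwkcoXyQPuJHW9pT8rlktkihdpTNJqlHHibVUz481AntmptypGqPKdg22EjvjrHk5Q4Op/ahZjgkSoFPphH1gWZcCC2xSPi/mk6nu9DF4Jyr1dJq+hJMPuvQ10ozOpzhemUKD9dGoXIh9g78/+M9Y8/naOW+UxZAy8BGrcpjM27sLHU0K+qxLRFw36Xlgur2+lEiSVt0F2iPpbAiJug3hUQTx2K3gkMG36auVsgrWvK9Q=="

print(sha256_fingerprint(key_line))
```

Note that ssh itself strips the trailing ‘=’ padding when it displays SHA256 fingerprints, so compare everything before it.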

Why bother with all that? Well, remember that the requirement to verify a server’s ssh fingerprint should not be carried out over ssh itself. I get my servers to write their ssh fingerprints into the /etc/issue file, and this is displayed on the server console by default along with the login prompt. So I can always validate the ssh keys using something that isn’t ssh …

Microsoft’s iOS and Android Outlook app

Wednesday, February 4th, 2015 | Jim Cheetham

Microsoft have recently released a new “Outlook” email app on the iOS and Android mobile platforms. This app is a rebrand of the recently-purchased Acompli.

The user interface is apparently quite effective, mixing calendar and priority mail and allowing fast responses to messages.

Unfortunately, at this stage in the app’s existence it takes some security shortcuts that are not ideal. All your email is copied into “the cloud” (this is a techno-marketing phrase that simply means “someone else’s computer” – and of course we should assume that “the cloud” will always be in a hostile legal environment, where government agencies from multiple countries will have free access to all your data). Worse, if you are accessing an Exchange service (i.e. University mail) your username and password are also stored in the cloud in order to make this work. The app doesn’t make this clear to users, and for some people that could represent a real problem.

More directly, this cloud-based login also actively violates the security policies that the University sets on Exchange email access. In order to protect University-owned data, devices that connect to Exchange are required to have local security policies like active screen locking, and to respond to remote wipe requests when they are reported stolen/missing. The current Outlook app does not apply these policies to the devices that use it, and although remote wipe might correctly remove data copied into the cloud, it doesn’t remove anything from the missing device. Worse, if you have multiple devices using this app, we can no longer wipe just the missing one; this app services them all from the same connection, and therefore a wipe affects all of them at the same time.

There has been a lot of press about this Outlook app recently – from the usability point of view it’s all positive, and from the security point of view it is all negative. Hopefully Microsoft will be able to put in some new development resources to help address these problems soon.

In the meantime, ISO recommend that you do NOT use this app with University email services.

Phone-to-email Spam

Wednesday, November 26th, 2014 | Jim Cheetham

Can I send you an email?

In the last few months there has been a rise in “pretexting” phone calls from legitimate marketing organisations, probably in response to anti-spam legislation around the world.

The usual script is an unsolicited phone call from a real live human, asking for permission to use your email address in order to send you some marketing material, usually described as a “White Paper”.

The calls are often made over a low-quality connection (i.e. cheap VoIP) and come from non-native English speakers (a kind way to suggest “offshore call centres”). However, they do generally respond well to a polite “No thanks” as an answer, and to requests to not be called again in the future. If permission is given the eventual email usually represents a legitimate trading company of some sort.

All in all, no real problem.

I have a business opportunity for you!

However, we’re beginning to hear of the same approach being used by spammers, particularly of the advance-fee fraud variety. A small amount of research (i.e. get your name and job title from a website), a hacked VoIP system (which lets them call anywhere in the world for free), and a fresh email address from one of the big free webmail providers potentially gives these criminals a much more direct line to your mailbox and your attention.

This is a particular worry because it won’t be long before these techniques are used for distributing fresh malware – you receive a difficult-to-understand phone call from someone with urgent information to send to you, and a couple of minutes later in comes the email, along with a juicy PDF attachment. Would you resist the temptation to click? How can you tell the difference between an attack, and a real foreign student or academic trying to work with the University?

(Some of you reading this might suddenly realise that you already open too many attachments without stopping to check fully the source!)

What should you do?

The best defence, if you are unsure, is to check with your colleagues and see what they think; to check with your IT support; or to ask the ITS Information Security Office for an opinion.

If you don’t have an opportunity to get a second opinion, you have a few technical ways to reduce the risk. Firstly, wait a while … come back to the message in a couple of hours’ time. If the message arrived out of hours, just don’t open it until you are back at work. Remember that just because it seems urgent to someone else, it is not necessarily urgent to the University!

Instead of just opening the attachment, ask your anti-virus software to scan it. This is best combined with the “wait a while” approach – if this is a new malware sample (tens of thousands are automatically created every day), give your AV software time to get an update from the vendor.

Alternatively, open the attachment in an unusual program. For example, malware PDFs often only successfully attack Adobe Acrobat Reader, so if you have a different PDF reader available you could use that. Instead of opening untrusted attachments with Microsoft Word, open them with a copy of LibreOffice. If your job role means that you will be receiving unsolicited attachments regularly, get your IT support to help you install these alternatives.

Finally, if you are in any doubt, leave the file alone and refer the whole thing to the Information Security Office. We can check a lot more details to find out what is going on, and we really don’t mind being asked.

TrueCrypt & file encryption

Thursday, June 26th, 2014 | Jim Cheetham

TrueCrypt is dead

We used to recommend TrueCrypt as an effective file encryption solution, suitable for exchanging data sets over untrusted networks as well as for medium-term offline storage or backups.

Unfortunately, over the last few weeks it has become clear that the TrueCrypt authors have withdrawn their support for the product; and while the source code is available (and is actively being audited), it is not Open Source licensed, and should not be used in the future. TrueCrypt is effectively dead.

What should I do?

What does this mean for people who are currently using TrueCrypt? I’d recommend that you migrate your data out of TrueCrypt and into some other format; not in a rush, because there are no currently-known attacks or vulnerabilities in the product, but in a well-planned way. You should not start any new storage schemes using TrueCrypt.

What alternatives are there?

There doesn’t seem to be any usable and “free” software that does everything TrueCrypt did, but most people we talk to don’t actually need all of those features at the same time anyway.

We are currently recommending the 7z archive format with AES encryption as a solution for :-

  • Cross-platform support
  • Protection in transit (email, dropbox, etc); sharing
  • Medium-term storage on untrusted media

Please be aware that University-owned data should always be accessible by the University itself; so if the only copy of your data is encrypted in this way, the passphrase used as the key needs to be made (securely) available to the appropriate people (usually your employment line management).

7z?

7z is the file format originally implemented by the Open Source 7-Zip file archiver; it is publicly described, and there are now multiple software implementations available. It is currently regarded as one of the best-performing compression formats available. Read more on the Wikipedia entry. Command-line users might like the p7zip implementation, packaged in Debian and in the EPEL repository for Red Hat.

7z applications usually do not use encryption by default; make sure that you select this option for secure storage.
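As a sketch of what that invocation looks like (the archive name, paths and passphrase below are placeholders, and the switches are as documented for p7zip), note -mhe=on in particular, which also encrypts the archive’s file listing:

```python
import subprocess

def encrypted_7z_command(archive: str, paths: list[str], password: str) -> list[str]:
    """Build the p7zip command line for an AES-encrypted archive.
    -p supplies the passphrase; -mhe=on also encrypts the file names
    (without it, anyone can still list the archive's contents)."""
    return ["7z", "a", f"-p{password}", "-mhe=on", archive] + paths

cmd = encrypted_7z_command("research-data.7z", ["dataset/"], "correct horse battery staple")
print(" ".join(cmd))

# To actually create the archive (requires the 7z/p7zip binary installed):
# subprocess.run(cmd, check=True)
```

In interactive use you can pass a bare -p instead, and 7z will prompt for the passphrase rather than exposing it on the command line.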


Down or Not?

Wednesday, October 10th, 2012 | Jim Cheetham

Here, this looks like a good joke … downornot.com is down!

Except of course, it isn’t down …

Or perhaps it is …

So, is DownorNot.com down, or not?

The problem here is that the meaning of “it” in the phrase “is it up or down?” is different each time.

  • For the human, “it” is the service provided by the website, and that service is certainly non-functional. It looks like the website’s usage of the Google AppEngine service has gone over quota, and the developer who wrote the website didn’t anticipate this error. In this case we would also expect the service behind the website to be broken at the moment, although often a service and its website are separate things.
  • For isitup.org, “it” seems to be the initial contact with the site’s webserver, which is responsible for handling the HTTP conversation.
  • For downforeveryoneorjustforme.com, “it” is the same thing, the webserver and not the application – but they go a couple of steps further, following the redirections and noticing that the final page is delivered with an error status. In this case the failure of the application happens to be reflected in the webserver status code; it doesn’t have to be.

Let’s ask isitup.org again, but this time go straight to www.downornot.com instead of just downornot.com …

So, if you are monitoring or testing a service, make very very sure that you understand what the question “Is it up?” is supposed to mean — and this depends on who is asking. Then, make very sure that you understand how your monitoring tools work, what you are asking them to do, and how to interpret them. Obviously, this isn’t as easy as it sounds …

If you are not careful, you end up with monitoring that confidently reports an answer to the wrong question.

The future of computer threat response

Thursday, August 23rd, 2012 | Jim Cheetham

Dan Geer is a voice in the IT Security world that should be listened to, and he has co-authored a short but intense article in the IEEE S&P Cleartext column that addresses the reality of the rapidly-changing threats that we all face.

“Stand Your Ground” has a few key messages, which I’ll try to summarise :-

  • Minimise the number of targets; stop adding new services without effective defences, remove old services. Use the savings to fund better security for what remains.
  • Distrust the internal network; distrust any service that is not continually verified. Defend against outbound traffic as well as inbound traffic.
  • Do not assume perfection is possible; plan for failure modes that reduce services sensibly. Reduce the time-to-repair with automation instead of extending the mean-time-between-failure.

For more reading and a less technical take on these ideas, here’s another article about Dan’s thoughts from Ben Tomhave, a GRC (Governance, Risk & Compliance) consultant.

The dangers of testing in a live environment

Monday, August 6th, 2012 | Jim Cheetham

On August 1 2012, the New York Stock Exchange started to record significantly unusual levels of activity at 0930, as the markets opened. Trade rates were running at 30% above normal. By the end of the day one trader alone, Knight Capital Group Inc., had burnt over $440 million. The trades damaged market values for the whole day and almost destroyed Knight completely.

Technical analysts Nanex have posted a great analysis of the pattern of trades, which enabled them to identify their likely origin and to present a reasonable theory that seems to fit the facts: Knight seem to have released an internal trade-testing application onto their production servers. It tested the live NYSE market. It lost money, because making money was not a requirement for testing.

It is hard to come up with a test environment for software that acts in a realistic manner, especially when you do not want to let the software itself be aware that it is being tested (because that will change its behaviour, and then it isn’t a proper test, is it?). It is also hard to construct tests that have to change the system state, when testing things that write to databases for example. And if you do accidentally run the test in a live environment, you can always recover from backups, right?
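One defensive pattern (a sketch of my own, not a description of what Knight actually did) is to make the software fail closed: refuse to select any endpoint at all unless the environment has been explicitly declared, so an unconfigured deployment stops instead of quietly trading for real:

```python
import os

class ProductionGuardError(RuntimeError):
    """Raised when code cannot prove which environment it should run in."""

def trading_endpoint() -> str:
    """Fail closed: the deployment must explicitly declare its environment.
    An unset or unrecognised TRADING_ENV is an error, never a default."""
    env = os.environ.get("TRADING_ENV")
    if env == "production":
        return "nyse-gateway.example.internal:9000"   # placeholder address
    if env == "simulation":
        return "localhost:9000"                       # placeholder address
    raise ProductionGuardError(
        f"TRADING_ENV must be 'production' or 'simulation', got {env!r}")
```

The point of the guard is that the dangerous misconfiguration – test code landing on a production server with no environment set – produces a loud crash at startup rather than a day of real trades.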

No, not always. Not $440 million’s worth of real-world money …