United Airlines Security Breaks at Its Weakest Point: The Human Element

United Airlines seems to be lurching from one bad PR story to another. This time, a United Airlines flight attendant accidentally posted the keypad access codes for airplane cockpit doors on a public website. The Wall Street Journal revealed the story, but did not identify the website or online forum where the codes were posted. Based on the available information, it appears the code leak was unintentional – pilots and flight attendants regularly use online forums such as Facebook groups for general discussion. This time, however, one flight attendant took the discussion a bit too far.

This was a significant breach of security without a hacker in sight. Just another case of the biggest security risk and the weakest link in the security chain – the human element.

Airlines have maintained strict access control to the flight deck ever since 9/11. The keypad code alone would not necessarily grant access: the captain must also visually identify the person requesting entry, and can decline access even when the correct code is entered. United moved immediately to change all of its cockpit door access codes and close off any possibility of exploitation.

This story highlights the importance of training everyone in the chain of ownership and control of security information, and of backing that up with regular refresher courses.

How significant is the human element in security procedures?

The alarming fact is that the human element contributed to 95% of all security incidents recorded globally in the IBM Security Services 2014 Cyber Security Intelligence Index. The most common failures are opening unsafe email attachments, clicking on unsafe website links, choosing weak and easily guessed passwords, losing laptops and mobile devices, and failing to keep software up to date or apply security patches. Humans quickly become blasé and bored by routine, losing sight of the rationale for staying alert and sticking religiously to security procedures.

There is also an element of laziness, forgetfulness and the “it can’t happen to me” syndrome.

Planning for the human element in security defenses

Humans design the assets and facilities that security systems protect. They then design the security defenses around those assets, which are in turn used by humans. Humans make mistakes all the time, and this critical characteristic needs to be addressed by security design, implementation and training.

The most effective remedy is frequent and relevant refresher training. Frequent, very short bursts that each focus on a particular aspect of security work best and are least disruptive. The more dramatic and memorable they can be made, the better. The objective is to ensure, as far as possible, that the subject remembers the training at the point where it is needed – for example, shielding the keypad when entering a door access code so that nobody can see it. It is the simple, routine things that humans fail at as time goes by.

The role of government in protecting human lives

The United Nations report Human Security in Theory and Practice covers a much wider scope of what constitutes security, of course. However, it does acknowledge that “human security threats cannot be tackled through conventional mechanisms alone”. Governments have a duty to protect their citizens. While national security, anti-terrorism and highly visible measures such as airport security screening are vital components, so is the education of the man and woman in the street. In wartime, the slogan was “Careless talk costs lives”. While not as dramatic, carelessness remains the biggest threat to security defenses of all types and at all levels.

Governments can do more to raise public awareness of the need to maintain a simple but effective level of vigilance. Human security failings that lead to breaches cost money, reduce consumer confidence in technology, and are an attack vector for foreign and criminal hostiles. National security starts at home.

U.S. Senate Votes to Kill Privacy – Here’s How to Mind Your Browsing Habits

The U.S. Congress has voted to repeal the broadband privacy rules that required ISPs to seek consumer consent before selling Web browsing and app usage data to advertisers. The rules were approved under the Obama administration and were scheduled to take effect by the end of this year. The repeal means that ISPs are legally entitled to sell their users’ browsing history to the highest bidder by default, without asking permission.

Of course, free services like Google and Facebook have been doing this for years, but people pay for a broadband connection, which makes it quite a bit different. While ISPs should provide an opt-out, the measure erodes any remaining vestige of privacy for everyday surfing. People who are not reasonably tech-savvy may be totally unaware of the new playing field and fall victim by default.

So What Can You Do?

The usual precautionary measures – clearing browser history, deleting cookies, setting the browser to incognito or private mode, or installing software designed to block advertiser tracking – are ineffective here, because your browsing sessions still pass through the ISP’s servers and network. Instead, here is a list of viable ways to prevent an ISP from selling your Web data:

1. Apply to opt out

ISPs should provide this option; if they do not, users should contact them and ask. It is not exactly ironclad, because many ISPs are vague about exactly how they track their subscribers’ online activity. The FCC fined Verizon $1.35 million last year because it neglected to tell smartphone users that it was using “supercookie” technology to track their browsing habits regardless of privacy settings, and did not initially provide an opt-out.


2. Switch to a different ISP

Not all ISPs will take advantage of this legalized data-sharing opportunity; some smaller ISPs actually protested against the repeal. Changing ISPs may be easier said than done for users in rural locations where choice is limited or non-existent. However, expect some ISPs to develop a non-sharing policy as a marketing strategy. Watch this space.


3. Invest in a VPN

A Virtual Private Network is like an encrypted tunnel between a computer or phone and the Internet. In theory it guarantees privacy, but not all VPNs are equal, and buyers should do their due diligence by researching expert reviews. That One Privacy Site is an excellent review and monitoring service run by a knowledgeable enthusiast for the tech- and privacy-savvy. It is probably best to avoid the free services, because they may well be selling their subscribers’ browsing history themselves – after all, they must generate revenue somehow. A VPN will slow down surfing, and users may not be able to view high-quality video streams on Netflix, for example.


4. Use Tor

Tor supplies a browser and service that hides your location and conceals your surfing activity so that it cannot be tracked. There are many thousands of computers, called relays, in the Tor network, all provided on a voluntary basis by fans of the service. The service bounces each user's web traffic between several of these relays at random. Like a VPN, it slows surfing speeds. It is also best suited to the technically minded – or to those with a technically minded friend to configure it.
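For the more hands-on reader, here is a minimal sketch of how an application can send its traffic through a locally running Tor client, which by default exposes a SOCKS5 proxy on port 9050. The example assumes Python with the requests and PySocks packages installed; the URL queried is purely illustrative.

```python
import requests

# Tor's client normally listens as a SOCKS5 proxy on localhost:9050.
# The "socks5h" scheme makes DNS resolution happen inside Tor as well,
# so lookups do not leak to the local network.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# Illustrative request only; the response arrives via a Tor exit relay.
response = requests.get("https://check.torproject.org/", proxies=TOR_PROXIES, timeout=30)
print(response.status_code)
```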

The privacy law repeal only affects ISPs in the U.S., but the implications apply to Internet consumers across the world. Many countries have no strict privacy laws preventing ISPs and Internet companies from freely selling consumer Web data. In those places, it is up to consumers themselves to adopt Internet privacy best practices.

Encryption Apps Let Trump Aides Break the Law – Would You Compromise Transparency for Privacy?

Senior members of the Trump administration have resorted to encrypted communication apps amid concerns about the email hacking that disrupted Hillary Clinton’s presidential election campaign and exposed sensitive conversations of Democratic National Committee (DNC) members to WikiLeaks.

The use of encrypted and disappearing messaging apps at the White House may violate the Presidential Records Act, which requires the archiving and retention of work-related communication for transparency and litigation purposes. Trump aides are reportedly using the messaging app Signal, which offers end-to-end encryption, for personal communication. Maintaining a complete record of email communication at the White House is already a tough endeavor, and adding encrypted communication to the mix will only make the public disclosure laws harder to enforce.

At the same time, the Trump administration fears that White House staffers are using the technology to expose the government’s darkest corners without leaving a digital trail. The messaging app Confide, with its Snapchat-like disappearing messages, is allegedly being used for exactly this purpose by some U.S. government officials.

How Encrypted Messaging Works

Encryption is the process of converting information into apparently random and meaningless data (ciphertext). Various mathematical functions or algorithms may be used to convert the original data into ciphertext. The process of converting ciphertext back into its original form is called decryption.

Modern cryptographic protocols use numeric codes known as encryption keys to encrypt and decrypt the data. In end-to-end encrypted messaging, the keys are available only to the sender and recipient. Third parties – including hackers, government agencies, Internet service providers and even the messaging providers themselves, such as Signal and Confide – cannot access the keys. Private communication between end users therefore remains unreadable and untraceable by any outside party.
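As a rough illustration of the symmetric core of this idea – not the actual Signal or Confide protocols, which add key exchange and much more on top – here is a minimal Python sketch using the Fernet recipe from the cryptography package. Only a holder of the shared key can turn the ciphertext back into the message.

```python
from cryptography.fernet import Fernet

# A key shared only by the sender and the recipient.
key = Fernet.generate_key()

# Sender: plaintext in, opaque ciphertext out.
ciphertext = Fernet(key).encrypt(b"Meet at the usual place at 9.")
print(ciphertext)  # unreadable without the key

# Recipient: the same key recovers the original message.
plaintext = Fernet(key).decrypt(ciphertext)
print(plaintext.decode())
```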

Privacy or Transparency?

The concept of secure encrypted messaging is widely popular and appreciated for personal use, even in the White House. Regulations that require data retention apply only to work-related communication involving senior government officials, to ensure transparency. To enforce public disclosure laws for communication that takes place over end-to-end encrypted messaging apps, the government would also need the decryption keys to make sense of any encrypted data archived by the sender and recipient. To make matters more complex, apps such as Confide go a step further, with disappearing messages that prevent users from archiving or even taking screenshots of the conversation.

Third-party encrypted communication apps are skewed toward privacy, to the point where end users cannot be held accountable for their communication, since the paper trail is inaccessible or eliminated entirely. In a public office or collaborative workplace, excessive privacy compromises transparency. The authorities concerned should therefore stay on top of the communication technologies their employees use, and may need to regulate the use of end-to-end encrypted messaging apps for work purposes. The trade-off between privacy and transparency will become a pressing issue for the corporate world as security-aware end users rapidly adopt encrypted communication technologies.

Are the Chinese Reading Your Texts Right This Minute?

About 700 million Android phones were sending their users' data to a server address in China every 72 hours. The owners of the phones had no idea it was happening. The data known to be extracted included:

  • Location information (where you were, at all times)
  • Call logs (who you spoke with)
  • Text messages (including deleted messages)
  • Contact lists

The backdoor was in the firmware of a component supplied to a large number of Asian phone manufacturers and at least one American one. It was only discovered when a security analyst bought a $50 infected phone for testing and noticed an unusually high level of network traffic when he powered it up.

Was this a one-off occurrence?

Roughly five to six new Android phones are released onto the worldwide market every day on average. The greatest growth surge is in the Asian market, where extremely cheap devices proliferate. Intense price competition means that manufacturing costs must be kept low, which invites the supply of the cheapest components – components whose cost may be secretly subsidized by an interested party. The company involved, Shanghai AdUps Technology, supplies software to phone and component manufacturers that can also remotely install apps on a smartphone and update them on demand.

Just how private is my phone’s data?

Forget any notion that a simple passcode keeps your phone data confidential. There are many companies that specialize in developing and selling equipment that can crack almost any phone’s security and suck out its data in seconds. They sell that hardware to government security agencies, domestic and foreign, such as the FBI and NSA, to police forces, to corporate clients, and to almost anybody who can pay the price. They may present a veneer of ethics by checking the credentials of potential clients, but that is a flimsy defense against the equipment falling into the “wrong hands”. And that is in addition to the shadier activities of software and component suppliers like the Chinese firm above.

Why would they want my information?

A state actor would have zero interest in the phone data of the average person – only in individuals with access to facilities, organizations or activities (including criminal ones) of interest to them. The biggest use by far is in the field of Big Data, where the information feeds marketing and product development by manufacturers and software vendors. That is the purpose the Chinese company claimed lay behind its collection of phone data. But do not for one moment think that a government agency would not commandeer that data if the need arose.

How can it be justified?

All nation states have their own specific security interests and prioritized lists of targets. They can use the excuse of national security, counter-terrorism, policing criminal activity, or any other rationale they choose. History is littered with examples of government agencies acting outside the law in almost any country you care to name. The Internet enables state actors to easily reach into any other state and attempt to hack its agencies, government departments, defense contractors, banks, and even the general population – as evidenced by the Chinese mass phone hack.

Why is it so important?

Some people naturally think, “What the heck – I don’t care, because nobody would be interested in my boring old texts.” But that misses the point. Privacy is as vital a concept as freedom of speech or the right to vote. Just because you don’t bother to vote does not mean you don’t care about that right. Erosion of personal rights is a characteristic of despotic states, and any weakening of those rights and freedoms is a move in that direction.

Surprising Developments in Artificial Intelligence Cryptography

You may not have heard of the Google Brain team. They are a fairly specialized research group based in Mountain View, California, and, as the name suggests, their work is all about A.I. development – specifically, A.I. built on neural networks.

Recently, Google Brain took its three resident neural networks, named Alice, Bob and Eve, and gave them a little problem to work on. Alice was instructed to encrypt and send a message to Bob. Bob was instructed to decrypt the message from Alice. And Eve was instructed to try to snoop on the message. Alice and Bob were each given the same key to use for encrypting and decrypting the message.

That’s it. They were given no information or data on cryptography; they had to work from scratch, building their own cryptographic algorithms. Over several runs of the test, Alice developed an encryption technique that Bob learned to match, successfully decoding her messages. Eve managed to partially snoop on the messages on several occasions, which only pushed Alice and Bob to improve their algorithms further.

Sci-Fi sexiness aside, the big takeaway here is that these neural networks invented cryptographic algorithms that their operators did not understand. That is truly groundbreaking.
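For readers who want to see the shape of the experiment, the sketch below is a toy re-creation of the adversarial setup described in the underlying paper (Abadi and Andersen, “Learning to Protect Communications with Adversarial Neural Cryptography”, 2016). It is illustrative only: the real Google Brain models used convolutional “mix and transform” networks and a more careful loss, and the layer sizes, batch size and training schedule here are assumptions.

```python
import torch
import torch.nn as nn

N = 16  # toy message/key length, with values in {-1, +1}

def net(in_features: int) -> nn.Sequential:
    # Small MLP stand-in for the paper's convolutional networks.
    return nn.Sequential(nn.Linear(in_features, 64), nn.Tanh(), nn.Linear(64, N), nn.Tanh())

alice, bob, eve = net(2 * N), net(2 * N), net(N)  # Alice/Bob see the key, Eve does not
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)
l1 = nn.L1Loss()

for step in range(2000):
    p = torch.randint(0, 2, (256, N)).float() * 2 - 1  # random plaintexts
    k = torch.randint(0, 2, (256, N)).float() * 2 - 1  # shared keys

    # Alice/Bob step: Bob should reconstruct the plaintext, while Eve should do
    # no better than random guessing (an L1 error of about 1.0 on +/-1 data).
    c = alice(torch.cat([p, k], dim=1))
    loss_ab = l1(bob(torch.cat([c, k], dim=1)), p) + (1.0 - l1(eve(c), p)) ** 2
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

    # Eve step: on fresh ciphertexts, minimise her own reconstruction error.
    c = alice(torch.cat([p, k], dim=1)).detach()
    loss_eve = l1(eve(c), p)
    opt_eve.zero_grad(); loss_eve.backward(); opt_eve.step()
```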

The Human Weakness in Cryptography
So far, the Google Brain team has worked on symmetric encryption of data. The current human-designed state of the art in this field is still AES. Provided the key is kept secret and side-channel attacks are mitigated, AES is regarded as impossible to break with today’s technology, and when used with a 256-bit key, AES-256 is regarded as secure even against tomorrow’s quantum computers.

However, the weakness is still key management. We mere humans need to manage and secure our encryption keys. That means storing them somewhere physically secure and protecting the stored keys with passwords. In the grand mathematical scheme of things, it is the human factor that remains the weakest point.
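To make the key-management point concrete, here is a brief sketch of AES-256 in GCM mode using the Python cryptography package. Note that everything hinges on the key variable, which a human still has to generate, store and protect; the example simply keeps it in memory.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 32 random bytes: the entire secret
nonce = os.urandom(12)                     # must never be reused with the same key

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"the quarterly figures", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)

# The mathematics is the strong part; losing or leaking `key` is the human part.
assert plaintext == b"the quarterly figures"
```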

Where to from here?
Is it possible that a computer can design a better cryptosystem than humans?

At first glance this seems like something out of a Sci-Fi movie, but in other areas of machine learning, machines have become as competent as humans, if not better. In 2014, the DeepFace project at Facebook achieved facial recognition on a par with humans, and automated facial recognition machines are now commonplace at airport immigration desks around the world, replacing human agents. Self-driving cars are now safer than human drivers, with Elon Musk foreseeing a future where manual driving becomes illegal.

In the view of the author, AI cryptography still has a long way to go. In particular, current “human” cryptography allows cryptographers to provide mathematical proofs of security. The challenge with AI is that no one understands how it works – so is provable security even possible?

Yahoo 2014 Security Breach Exposes the Harsh Reality of Internet Security

Yahoo suffered a major security breach affecting hundreds of millions of users – but we only found out about it two years later. This incident, combined with the 2012 Dropbox breach, demonstrates the harsh reality of Internet security: most breaches go undetected, or unreported, for years.

The background to Yahoo’s security breach

Yahoo is the latest major player to reveal an earlier hacking breach and theft of user data. What the company describes as a “state-sponsored actor” stole 500 million users’ details in 2014. Yahoo has not yet revealed which state is under suspicion, how it came to this conclusion, or the mechanism used to breach its security.

The Recode website was first to publish the story on September 22, and later that day Yahoo confirmed the news on its Tumblr site. It follows hot on the heels of the recent Dropbox revelation that almost 70 million encrypted user access credentials were hacked and stolen in 2012 – Dropbox has only now disclosed the extent of that breach. The Yahoo incident has implications for the proposed $4.8 billion Verizon takeover of Yahoo: disgruntled users may launch a class action suit that could hit Yahoo’s balance sheet and depress the stock.


What happened?

The stolen data is reported to consist of passwords hashed with the bcrypt algorithm, along with user names and personal information including birth dates, phone numbers, email addresses, and both unencrypted and encrypted security questions and answers. A known cybercriminal going by the name “Peace” offered the data for sale on an underground website, which is what brought the incident to light. The attacker did not access more sensitive data such as bank and credit card details, which are stored in a separate system.
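The bcrypt detail matters because a properly hashed password is never stored in a recoverable form; an attacker has to guess candidates one at a time against a deliberately slow function. A minimal Python sketch (not Yahoo’s actual code, of course) looks like this:

```python
import bcrypt

# At registration: store only the salted, slow hash of the password.
stored_hash = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt())

# At login: re-hash the candidate and compare in constant time.
print(bcrypt.checkpw(b"correct horse battery staple", stored_hash))  # True
print(bcrypt.checkpw(b"letmein", stored_hash))                       # False
```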


What happens next?

Yahoo says it has emailed an alert to all affected users, prompting them to change any passwords they have not updated since 2014 and urging them to consider stronger authentication methods than passwords alone. Yahoo has also disabled all unencrypted security questions and answers.

Even though the stolen data is relatively innocuous in itself, it poses risks to users far greater than simply having their email accounts hacked. Criminals use this kind of information to mount further attacks, and may also target an individual’s network of contacts for phishing and social engineering scams.


What can users do about it?

While Yahoo and other online service providers make strenuous efforts to protect data, the lessons for us users are clear. We really do need to learn and use stronger account authentication measures. This self-discipline need not cost a penny, but inertia holds us back.

There is still a large number of Internet users, including seasoned IT pros who should know better, who reuse the same favorite passwords over and over across multiple online accounts. Hackers know this all too well: they use known passwords to try to break into other accounts, opening avenues for exploiting financial information and committing bank account or credit card fraud.

As well as using the stronger passwords that free password generators provide, we should consider options such as two-step authentication, which many online services now offer to reduce unauthorized access. There are also free and paid password managers that will store strong passwords too complex to memorize; many will automatically fill in the details when you visit a website and are asked to sign in.
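Anyone who would rather not trust a third-party generator can produce strong, unique passwords locally with Python’s standard library; the length and character set below are simply illustrative defaults.

```python
import secrets
import string

def strong_password(length: int = 20) -> str:
    """Return a cryptographically random password of letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())            # a different password every time, for each account
print(secrets.token_urlsafe(16))    # a URL-safe alternative, handy for recovery codes
```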

Whichever option we choose, we must take action to improve our online protection. Break-ins like these will inevitably happen again despite the best efforts of the service providers. Doing nothing is no longer an option.

Red Cross Blood Bank – Australia’s Largest Data Breach

On Friday, October 28, 2016, the ABC published a news report about the disclosure of private blood donor information from Australia’s Red Cross Blood Service (ABC 2016). The personal information of about 550,000 blood donors had been exposed, including names, addresses, and details of “at-risk sexual behaviour”. This is believed to be Australia’s largest security breach.

Public reports to date, such as the one written by Troy Hunt, indicate the breach was not the result of a hack or forceful act, but rather a discovery made through a common type of scanning activity. While indications suggest the spread of the leaked data is low, it calls into serious question how such a critical service could have had such a lapse in data handling practices.

Upon reading the reports of the breach, the technically minded will have spotted the poor, or absent, data handling practices. As Hunt states, “most organisations have a raft of different systems, processes, people and partners that handle their data” (Hunt 2016), and based on his experience “it’s not unusual to see data pass through many hands. It shouldn’t happen, but it’s extremely common” [Troy Hunt’s emphasis] (Hunt 2016).

Mitigation

To reduce the chances of similar events happening again, rigorous data handling practices are needed.  Some of these practices include:

Data Anonymity

Personally identifiable data should be used only as a last resort; the default treatment should be to anonymise personally identifiable information.
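One common way to put this into practice – illustrative only, and not a description of the Blood Service’s own systems – is to replace direct identifiers with keyed hashes before data leaves the system of record. Records can still be linked for analysis, but identities cannot be recovered without the key, which here is a placeholder that would normally live in a secrets manager.

```python
import hashlib
import hmac

# Placeholder pepper; in practice this would come from a secrets manager,
# never from source code or the exported data set itself.
PEPPER = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (name, email, donor ID) with a keyed hash."""
    return hmac.new(PEPPER, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"donor": pseudonymise("donor@example.com"), "postcode_region": "4xxx"}
print(record)
```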

Information Classification Scheme

A simple information classification scheme categorises data according to its sensitivity and privacy requirements. An overly onerous scheme becomes unwieldy and is prone to misuse.

Needs-Based Access

This is a simple control that, in contrast to the information classification scheme, effectively classifies the people who may access sensitive and private information.

Encryption of private information

This is a technical control that provides a safeguard should people-oriented controls be ignored or fail. Engineering bespoke encryption solutions is a discipline demanding thorough knowledge and study; organisations should instead adopt encryption solutions based on well-studied standards that allow data owners to retain control of private keys and passwords at all times.


Response

Should a breach occur, a computer emergency incident response plan that has been drafted and approved by senior management is an important tool. It encourages a coordinated response and provides a lens through which energies can be focused. For organisations without dedicated computer security resources, an external computer emergency response team such as AusCERT, which was involved in the Red Cross breach, can provide expert advice and resources.

That such a preventable event could have afflicted both the Red Cross Blood Service and its donors is tragic – not just for the damage to goodwill, but for the likely reduction in blood donations over the short to medium term. The Red Cross Blood Service have responded in a transparent and honest manner. They have not sought to shift blame and “take full responsibility for this mistake and apologise unreservedly” (Australia Red Cross Blood Service 2016).


Sources

ABC, 2016. Red Cross Blood Service data breach. Available at: http://www.abc.net.au/news/2016-10-28/red-cross-blood-service-admits-to-data-breach/7974036.

Australia Red Cross Blood Service, 2016. Blood Service apologises for donor data leak. donateblood.com.au. Available at: http://www.donateblood.com.au/media/news/blood-service-apologises-donor-data-leak.

Hunt, T., 2016. The Red Cross Blood Service: Australia’s largest ever leak of personal data. Available at: https://www.troyhunt.com/the-red-cross-blood-service-australias-largest-ever-leak-of-personal-data/.

Leoni AG Lost €40 Million ($45M) to a Whaling Phishing Scam

Leoni AG is a 100-year-old company headquartered in Nuremberg, Germany. A global supplier of wiring systems and cable technology, it has 76,000 employees in 32 countries, a market cap of €1,015 million ($1,140 million), and a listing on the Frankfurt Stock Exchange.

Yet this behemoth fell prey to a fairly simple spoof-email scam in August that cost it €40 million in cash ($45 million) and has, unsurprisingly, resulted in a profit warning.

The company reported that fraudsters used fake emails and identities to target one individual in a successful attempt to transfer funds from a company bank account to an account controlled by the fraudsters. They picked a factory in Romania – the only one of the company's four factories in that country authorized to handle international money transfers. The spoof emails purported to come from a senior director in Germany and apparently were accepted without question by the officer in Romania.

Whaling is the term for a phishing scam that targets one specific individual in a corporation. To succeed, the fraudsters carry out an in-depth investigation of the company – its mode of operation, styles of communication and security capabilities – as well as the target victim’s roles, responsibilities and staff. Whether insider assistance was involved is not known at this point, but the required information can be pieced together by clever and patient fraudsters, who may use social engineering to ferret out small elements of the overall picture that appear innocuous in isolation.

To achieve this level of sophistication, fraudsters often create domain names that are so close to the real company’s domain name that a quick glance does not detect the slight name difference. An email coming from that fake domain, formatted in an identical manner to genuine emails, with similar language style and so on, can easily be accepted as the genuine article.
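A simple automated check can flag many of these near-miss domains before a human ever reads the message. The Python sketch below is purely illustrative: the trusted-domain list, similarity threshold and sample address are assumptions, and real mail filters rely on far more signals (SPF, DKIM, DMARC) than string similarity.

```python
from difflib import SequenceMatcher
from email.utils import parseaddr

TRUSTED_DOMAINS = {"leoni.com"}  # hypothetical list of known-good sender domains

def sender_domain(from_header: str) -> str:
    """Extract the domain from an email From: header."""
    _, address = parseaddr(from_header)
    return address.rsplit("@", 1)[-1].lower()

def looks_like_spoof(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that are very similar to, but not exactly, a trusted domain."""
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True
    return False

print(looks_like_spoof(sender_domain("CFO <boss@leonii.com>")))  # True: a near miss
```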

The core of the problem is that the fatal email was accepted as genuine without question. The fraudsters invested time and expertise in investigating Leoni AG. Con artists have been honing their email phishing skills for well over 20 years, and many have perfected their technique to the point where their fraudulent emails and other identification instruments are accepted instantly by their victims. Fraud has always moved with the times, but the public is usually slow to cotton on, and reports following a similar attack last year suggest these attacks are on the increase.

What precautions can we take to safeguard against this type of scam?

1. Watch out for email addresses with lookalike domain names. For example, if the official Leoni AG domain were @leoni.com, fraudsters might use a near-identical domain such as @leonii.com to phish victims.

2. If you receive an email requesting a financial transaction, pick up the phone and call the person. Never enter sensitive information into pop-up browser windows.

3. Use an anti-phishing and anti-spam service. It’s easier to get caught when you’re focusing on mission-critical business operations and can’t spare a moment to double-check the authenticity of email senders. Security solutions make your life easier.

4. When you must click through a link, hover the mouse over it first and check the actual URL, shown at the bottom of the browser window in Chrome. Make sure the links you need to access are valid and secure, check for an HTTPS certificate, and don’t click shortened URLs.

5. Educate employees on all levels to ensure that they are security aware and up to date with latest phishing threats, prevention practices and solutions.

6. Make sure the attachments are valid and secure before downloading.

It does appear unusual that an officer of a company would execute such a huge financial transaction on the basis of a single email. One might expect some basic security countermeasures, such as at least a phone conversation with the authorizing director, or a second authorization of the kind required when corporate checks are cut. Such common-sense precautions are easy to implement.

Phishing and social engineering are now so commonplace that security firms offer training courses for company staff, educating them on how to recognize the likely warning signs of a scam. Corporations have no excuse for not running at least an awareness program but, no doubt, some will only realize that when it is too late.

MIT Researchers Devise Tor Alternative That’s 10x Faster

Tor (The Onion Router) is now 14 years old, and the biggest bugbear that users consistently moan about is speed. Riffle, a joint development by MIT and the École Polytechnique Fédérale de Lausanne, is proclaimed to deliver significant advances in anonymity technology: both more reliable anonymity and speeds up to 10 times faster than Tor. Riffle is still at the prototype stage and quite a way from becoming commercially available. Two applications have been developed so far, one for microblogging and one for file sharing.

Riffle’s approach uses multiple technologies, none of which is new, but they are layered and interact in a way that has not been done before. The overall effect is that messages are split and their packets delivered in a random sequence that is computed in advance (hence the riffle, or shuffle) and verified at the receiving end so that the message can be reassembled.

The claim of more reliable anonymity rests on Tor’s known susceptibility to attack by introducing rogue code and predefined messages onto a node – one of its estimated 4,500 network servers. As the servers are owned and maintained by volunteers, the possibility of introducing a malicious node is obvious, and the known messages can then be tracked through the network. Riffle’s architecture uses an anytrust model, which means that as long as just one node remains uncompromised, users’ anonymity is not compromised.

At its core, Riffle uses a mixnet – a small group of networked servers – to perform the message shuffle. Unlike Tor, where messages pass sequentially from one node to the next, Riffle first sends the messages to all servers in the mixnet, where a hybrid “verifiable shuffle” of the already-split message components is performed. The shuffle also produces a mathematical proof that can be used to confirm the message has not been modified, protecting against malicious interference with the mixnet.

The network nodes use shared private-key encryption, which in turn relies on authenticated encryption, combined with the onion model of successive layers of encrypted message data. Each node receives the authenticated private key. This renders the packets effectively indecipherable except at the network nodes, where each layer is stripped away to reveal the encrypted routing directions for the next node. Messages are retrieved by the receiving party using Private Information Retrieval (PIR), further protecting client anonymity.
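The onion-layering idea is easier to see in code. The sketch below shows generic layered symmetric encryption only – it is not Riffle’s actual construction, which adds the hybrid verifiable shuffle, authenticated encryption and PIR – and the three-relay route and Fernet recipe are illustrative choices.

```python
from cryptography.fernet import Fernet

# Hypothetical three-hop route; each relay holds its own symmetric key.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys) -> bytes:
    """Encrypt the message in onion layers: the last relay's layer goes on first."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(onion: bytes, keys) -> bytes:
    """Each relay in turn strips exactly one layer with its own key."""
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

onion = wrap(b"hello, anonymity", relay_keys)
assert peel(onion, relay_keys) == b"hello, anonymity"
```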

The 10x speed enhancement over Tor has been measured in independent tests. Riffle’s combination of the verifiable shuffle and PIR yields compute and bandwidth efficiencies that add up to significantly higher throughput than Tor can achieve.

At this early stage, the future for Riffle is still unclear. The security community will take it to pieces to fully test its potential and to validate (or disprove) its heightened security claims. If proven, it will no doubt be welcomed by Internet users living under oppressive regimes, where staying alive can depend on total anonymity online. Its speed alone may position it as “the new Tor” and see it take over the mantle of the most popular anonymity technology. For now, it is a case of watching and waiting as it progresses from prototype to something tried and trusted.

Cyber Criminals Demand Ransom for 655,000 Patient Records

The famous American criminal Willie Sutton was once asked why he robbed banks, to which he reportedly answered, “Because that’s where the money is”. The same logic answers a question many people are asking in response to the accelerating frequency of cyberattacks on hospitals: because that’s where the personal information is. Personal information equals money – it is estimated to be worth ten times more on the black market than a credit card number. As Paul Syverson, one of the creators of onion routing and Tor, says, “Your medical records have bullseyes on them.”

Therefore, it should come as no surprise to read the numerous headlines in 2016 concerning cyberattacks on healthcare organizations. The year started with a highly publicized ransomware attack on the Hollywood Presbyterian Medical Center in February, which shut down the hospital for nearly a week until management agreed to pay $17K to the cyber criminals.

Unfortunately, that attack proved to be simply the opening shot across the bow of the health care industry. Earlier this summer, a trio of data breaches yielded a haul of 655,000 patient records, taken by a hacker or hacker group using the name “The Dark Overlord” – a former ransomware operator who has now chosen to pursue the high-stakes game of stealing protected health information (PHI) records. The breaches were discovered when the hacker contacted the three health organizations involved to alert them that their patient databases had been captured and that samples had been posted on RealDealMarket, an unscrupulous site on the dark web where cybercriminals sell everything from stolen credit cards to drugs.

The data breach included the following:

  • 48,000 patient records from a clinic in Farmington, Missouri, United States. The records were acquired in plain text from a Microsoft Access database.

  • 210,000 patient records, captured in plain text, from a clinic in the central Midwest of the United States. The records include Social Security numbers, first and last names, middle initials, gender, date of birth, and postal address.

  • The largest breach was a database of 397,000 records from a large clinic based in Atlanta, Georgia, which also included primary and secondary health insurance details and policy numbers. Like the other incidents, the data was not encrypted.

The Dark Overlord is demanding a ransom of $1 per record from each of the organizations and has assigned a separate deadline to each victim. If the demands are not met by those dates, the records will be sold to multiple buyers. The hacker claims to have contacted all three organizations before stealing the patient records, informing them that their networks had been breached and asking for payment in return for details of the vulnerabilities, but heard nothing. “Next time an adversary comes to you and offers you an opportunity to cover this up and make it go away for a small fee to prevent the leak, take the offer,” The Dark Overlord said in an interview with a news site that reports on the hacking community.

The three attacks share the same means of incursion: all three clinics used the same third-party healthcare information management application. The hacker infiltrated the vendor’s network by taking advantage of several SQL exploits, and then used a zero-day RDP exploit to gain access to the three clinics.

All three clinics have contacted their patients to alert them to the breach and the attendant risk of identity theft. In the case of the Atlanta-based firm, local police have already begun taking reports from patients whose credit has been compromised. All three organizations must now suffer major hits to their credibility and reputation, and lawsuits will undoubtedly follow. According to a 2016 study by the Ponemon Institute, the average cost per stolen record in the United States healthcare industry is $355, against a global average of $158 across all industries; applied to the 655,000 records stolen here, that figure implies a theoretical exposure of well over $200 million across the three organizations.

All of this points to the importance of encrypting your data, especially in the cloud. The era of storing data in plain text is over. No one ever wants to be contacted by a hacker.