Issues of Digital Security
Antony Funnell: It's hard to get away from the fact that for all the sophistication we humans have shown in developing new technology systems, we've got a surprisingly poor record when it comes to making them secure.
Hello, Antony Funnell here, welcome to Future Tense.
Today on the program, issues of digital security. We'll look at the role AI (artificial intelligence) might play in fending off hackers, we'll hear about a new and alarming report on the safety of e-health records, and we'll deal with an uncomfortable truth.
Monique Mann: We're really in an era of capitalist surveillance, and these tech companies make a lot of money through targeted advertising; their whole business model is essentially built on a program of surveillance. As individuals, when we sign up to use these companies' products and services, we make somewhat of a Faustian pact with them.
Antony Funnell: And that Faustian pact means major email providers like Yahoo and Google monitor the contents of your emails on an ongoing basis. Yes, your emails. And they continue to do it, because there's profit in doing so.
That was Dr Monique Mann, by the way, from the law faculty at the Queensland University of Technology.
First though, let's talk health. A recent report called 'Flying Blind' suggests that Australia's health sector is plagued by fragmented data. Here's one of the study's authors David Jonas speaking on RN's Health Report.
David Jonas: If you look at what we call points of service, which is really where health data is collected or produced every single day of the week, whether that's in the GP environment, pathology, imaging, surgery, hospitals or allied health, each of those is effectively locked up in its own little silo with the provider. Small amounts are provided up to bodies like the AIHW for certain kinds of reporting and research; everything else is locked up. If we look at the attitudes of consumers, as we point out in our report, something like 91% of consumers are happy to have their data joined up for a whole range of benefits relating to themselves, to health research and to the planning and management of the system.
Antony Funnell: David Jonas from Capital Markets Cooperative Research Centre.
The argument that silos in the Australian health sector need to be broken down is fair enough, but new research from the United States should sound a warning about the need for better security.
James Scott is a Senior Fellow with the Institute for Critical Infrastructure Technology in Washington and his latest report is titled 'Your Life, Repackaged and Resold'. His report looks at the so-called deep web, that part of the internet that's not available through regular search mechanisms and which is often used by criminals.
On the deep web, says James Scott, there's a thriving market for e-health-related data.
James Scott: The health sector silo within critical infrastructure is vulnerable to attack because its treasure troves of information and data are virtually completely unprotected. A hacker with even novice capability can go in, create a diversion with DDoS or ransomware on the network, find out where the vulnerable devices are, set up a beachhead for a future attack or set up some type of remote access Trojan so they can move in and out of the system undetected, and it's also very easy to exfiltrate data in packet sizes small enough to escape detection.
We are kind of seeing that health sector organisations are beginning to at least layer their security, so that when a hacker gets through one layer they have to overcome another. And they are starting to use multifactor authentication, and things like user behaviour analytics to detect abnormalities in user behaviour in case someone's credentials are compromised by an outside source.
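The user behaviour analytics Scott mentions come down to flagging activity that deviates from a user's established baseline. Here is a minimal sketch of the idea in Python; the login-hour data, field names and threshold are all hypothetical, and production systems model far richer features than hour-of-day.

```python
import statistics

def is_anomalous(baseline_hours: list, new_hour: int, threshold: float = 3.0) -> bool:
    """Flag a login whose hour-of-day deviates sharply from this user's history."""
    mean = statistics.mean(baseline_hours)
    stdev = statistics.stdev(baseline_hours) or 1.0  # guard against a zero spread
    return abs(new_hour - mean) / stdev > threshold

# Hypothetical user who normally logs in during business hours:
history = [9, 9, 10, 8, 9, 10, 9, 11, 10, 9]
print(is_anomalous(history, 10))  # False: consistent with the baseline
print(is_anomalous(history, 3))   # True: a 3am login warrants review
```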
Antony Funnell: How extensive is the deep web black market for electronic health records? Give us a bit of an idea of scale.
James Scott: It's massive, and there are markets and forums, buyers and sellers, for just about every appetite. And electronic health records are extremely valuable because they contain complete information, many times including the Social Security number. So obviously the first fallout the victim will experience would be identity theft, and that's easy for any novice on the deep web to start to capitalise on. But that's the least of the concerns. There are so many other things happening now. They are taking that information and giving identities to victims of human trafficking. Illegal immigration, where someone commits a crime under a stolen identity and the real owner then has to prove that their identity was compromised. It's extremely taxing, especially when you are on the deep web looking at the new uses for this information, and the new stalker forums. They have stalker forums that can localise to your market if you are a sexual predator, for example, or a paedophile. You can purchase 14-year-olds' electronic health records with an accompanying social media footprint for real-time surveillance. That's the kind of thing we are seeing now, and that's where this is headed.
Antony Funnell: And is part of the problem here that while healthcare organisations haven't kept pace with security, other parts of the economy and society have? Are they particularly vulnerable simply because they are an easy target, given that the others have strengthened their procedures?
James Scott: Yes, they are targeted more because…well, previously the financial sector was the most targeted. Then it shored up its vulnerabilities and became more difficult to penetrate. So adversaries made a lateral move and picked the next easiest and most vulnerable industry, and that is the health sector. And until the health sector becomes just as difficult to breach as the financial sector, it is going to continue to be victimised, and the individuals who have their records exfiltrated will continue to be re-victimised again and again.
Antony Funnell: You mention in your report that the FBI argues that healthcare related identity theft is much harder to detect, taking almost twice as long as identity theft in other areas. Why is it so difficult by comparison?
James Scott: With having your credit card stolen, the financial sector has alerts in place if there is some type of abnormality in the geography of your purchasing. In the health sector those protocols are not so readily in place. So it can go on and on after a hospital is breached. And maybe you are looking in the wrong place for identity theft: you're looking for loans taken out in your name, you're looking in the obvious places, as opposed to someone using your identity to purchase large amounts of Ritalin at multiple pharmacies.
Antony Funnell: And in the United States, as I understand it from your report, a healthcare provider isn't obliged to report a breach unless that breach involves 500 patients' records. So there would be a lot of people out there who would have had their records stolen or hacked into, and perhaps they are never told about that.
James Scott: Especially at the smaller doctors' offices in rural and lower-income areas. The thing is, most of these physicians don't even know they have been breached. There is someone sitting on the backbone of their network just exfiltrating data at will. That's how it is across the board. Hospitals, insurance companies, they are breached and there could be multiple adversaries within their network, but unless they have layered security that not only detects and responds but now also predicts with artificial intelligence, they are not going to be able to detect them.
Antony Funnell: And the individual victims of electronic health record theft, tell us about those. What sort of consumer protections are there, say, in the United States in this area?
James Scott: Not many. Victims will typically get identity theft protection from the hospital. There is now a lot of discussion around class-action lawsuits and the like. The downside is that insurance companies have money and can make payouts, but hospitals, for the most part, are basically non-profits or not-for-profits. So we are not seeing a lot of proactive movement by these organisations to shore up their vulnerabilities.
Antony Funnell: Just take us back a bit to the deep web markets themselves. How sophisticated are they as black markets? How do they actually function?
James Scott: They function just like eBay. They have their own review systems; you can leave reviews, they have star ratings. They are just as review-driven as eBay, Amazon or any of these vendor-type platforms. You have marketplaces and forums that are easy to get into, and then you have some that you need a referral to get into; you'll pay more, but it's a better or easier customer experience.
Antony Funnell: So I guess the question is if it's possible for an organisation like yours to do research into this area, to at least start to identify some of the marketplaces, why is it so difficult for authorities to put a stop to this?
James Scott: I think they are trying. The problem is that when you have local law enforcement, for example, trying to investigate a local breach at a hospital…it's so easy to see that they are police, and it makes everyone in the forum go dark. Even for researchers like us, everything stops until that group moves on to a different forum, because they don't know what they're doing. And it really becomes complicated when you have multiple local law enforcement agencies in one forum investigating a breach.
The FBI's method is to discover who one of the vendors is and then turn them, so agents can use those credentials to investigate. So I think law enforcement is becoming better at investigating, but this is a completely different skillset from anything you learn in college. It's a kind of cyber-technical social engineering that you have to become very good at to go in and investigate without being detected.
Antony Funnell: James Scott from the Institute for Critical Infrastructure Technology in Washington. And on the Future Tense website you'll find a link to their report. Let me give you the title again, it's called 'Your Life, Repackaged and Resold'.
Think of Las Vegas and security and you're likely to think of bouncers. But once a year the casino capital stages a major international security event: the Black Hat cybersecurity conference.
Now, this year's Cyber Grand Challenge involved artificial intelligence.
Man: So what we'll see here today is much more than the world's first all-machine hacking tournament. The lessons learned could lead to a world where computer viruses, malware and attacks are discovered in a matter of weeks, days or even seconds. Right now it takes…
Antony Funnell: And one of the attendees was a guy named Matt Devost, who's the Managing Director and Global Lead for Cyber Defence at a company called Accenture.
I got in touch with Matt to find out more about the challenge. And I began by asking him to set the scene, to give us a brief description of how we currently defend our computer systems against cyberattack.
Matt Devost: To date we've relied primarily on standard technologies. We've taken a very perimeter-driven approach for a number of years, in which we say we will protect ourselves from threats on the internet by installing firewalls and gateways and proxies, things that protect us from the attacks taking place. And then, on the chance that something gets inside our network, a virus or something with a malware component, we use antivirus software that is signature-based, that depends on knowing a piece of software is bad in order to identify that it is a risk and needs to be remediated. That perimeter mentality is what has really driven security over the past decade-plus: we build walls, we build firewalls, and then we rely on signature-based technologies to help us in the event that a piece of software or an intruder gets inside.
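At its simplest, the signature-based detection Devost describes reduces to comparing a file's fingerprint against a database of known-bad fingerprints. A minimal Python sketch, with a placeholder hash standing in for a real signature feed:

```python
import hashlib
from pathlib import Path

# Placeholder signature database: SHA-256 digests of known-bad files.
KNOWN_BAD = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # hypothetical entry
}

def is_known_malware(path: Path) -> bool:
    """Return True only if this exact file has been seen and catalogued before."""
    return hashlib.sha256(path.read_bytes()).hexdigest() in KNOWN_BAD
```

The structural weakness is visible in the docstring: a file is flagged only if its exact fingerprint is already catalogued, so novel or slightly modified malware passes clean.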
Antony Funnell: And that's still a very human-centric approach, isn't it, despite the application of technology.
Matt Devost: We've been able to automate the process to a certain extent, but you still rely on humans making the determination as to whether something is good or bad. A lot of the time they might be able to automate it based on other known-bad pieces of software, or traits within the software indicating it's from the same malware family, et cetera. But we are still fairly dependent on humans being a key component in the process.
Antony Funnell: The recent Black Hat cybersecurity conference in the United States saw an interesting hacking competition involving artificial intelligence. Could I get you to give us an overview of that competition?
Matt Devost: Yes, it was a competition held by DARPA, the United States Defense Advanced Research Projects Agency. DARPA takes on very hard technical challenges and tries to figure out the art of the possible. For a couple of years now they have run this Cyber Grand Challenge, which asks: can you create automated systems that operate without human intervention and engage in attack and defence activities? The culmination was a contest at the DEF CON hacking conference, right after Black Hat in Las Vegas, in which they put the frontrunners together in an environment with no human beings, completely air-gapped, no intervention whatsoever, and let them engage in network attack-and-defend activity against each other.
Antony Funnell: So these were automated anti-hacking systems trying to look for vulnerabilities in each other, is that correct?
Matt Devost: Yes, they were looking for vulnerabilities in the network they were plugged into. For these capture-the-flag contests the organisers create a network with nodes that have certain vulnerabilities, known or unknown. The systems, acting proactively on their own based on the programming and heuristics their developers had put into them, were searching that network, finding vulnerabilities that could be exploited, exploiting them and then patching them. The reason you patch is that in capture-the-flag you want to maintain possession of the flag, so you fix the problems that might let a competitor gain access to the same system.
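The transcript doesn't describe any team's internals, but the scan-exploit-patch cycle Devost outlines can be sketched abstractly. Everything below, the node model, function names and flag format, is invented for illustration and is not any competitor's actual system:

```python
# Illustrative sketch of an autonomous capture-the-flag agent's cycle:
# sweep the network, exploit what you find, then patch it behind you.

class Node:
    def __init__(self, name: str, flaws: set):
        self.name, self.flaws = name, flaws

def find_vulnerability(node: Node):
    return next(iter(node.flaws), None)     # hunt for a flaw, known or unknown

def exploit(node: Node, vuln: str) -> str:
    return f"flag::{node.name}::{vuln}"     # prove access by capturing the flag

def patch(node: Node, vuln: str) -> None:
    node.flaws.discard(vuln)                # fix the hole so rivals can't follow

def defence_round(nodes: list) -> list:
    flags = []
    for node in nodes:                      # map the network
        vuln = find_vulnerability(node)
        if vuln:
            flags.append(exploit(node, vuln))
            patch(node, vuln)
    return flags

network = [Node("mailserver", {"stack-overflow-cwe-121"}), Node("webserver", set())]
print(defence_round(network))               # ['flag::mailserver::stack-overflow-cwe-121']
```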
As with any contest, it's no fun unless you can watch the score, so they devised a means of keeping score on this air-gapped network: the results were burned to a Blu-ray disc, and that disc was carried between the air-gapped network and the audience network by a robotic arm. So it really was a genuine air gap; the only way to get data across was by burning it to the Blu-ray and moving it with the robotic arm. That allowed the audience to watch what was happening on the network in quasi real-time, with a slight delay given the data latency, but otherwise the machines were acting completely autonomously on the network.
Antony Funnell: And what was the result?
Matt Devost: The result was fascinating. From my perspective as someone who has been in the security field for 20 years and feels fairly in tune with the latest developments, I feel we made a great leap forward in machine capability from a machine learning, automation and automated-defence perspective.
The systems performed at a level I wouldn't have expected us to reach for another five or so years. So it was one of those technology-surprise moments, and I was legitimately impressed with what the contest was able to achieve.
Antony Funnell: A simple question; why would it be better to have an automated anti-hacking system rather than one that has some degree of human intervention?
Matt Devost: The benefit is that the system can learn at a level that is much faster and much more comprehensive than a human being, both in the expertise it builds and in what it is able to observe on the network by way of behaviour that is anomalous or indicative of something bad.
The other aspect is simply scale. We've seen automation in network attack over the years: attackers are writing tools that morph their code in real time to prevent their signature from being detected, automating and introducing machine-speed components on the attack side. So it's essential that we include those components on the defence side as well.
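That real-time morphing defeats signature matching for a mechanical reason: change a single byte of a payload and its fingerprint changes completely. A short demonstration with a harmless stand-in payload:

```python
import hashlib

payload = b"harmless stand-in for an attack payload"
morphed = payload + b"\x00"  # the attacker appends a single junk byte

print(hashlib.sha256(payload).hexdigest())
print(hashlib.sha256(morphed).hexdigest())  # a completely different signature
```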
Antony Funnell: Are there risks, though?
Matt Devost: I think the risks are that as you improve the automated defences you are also going to see a corresponding improvement in automated attack. As I mentioned, to engage in the capture-the-flag activities the systems were first attacking the nodes on the network. The potential is that some of these technologies will be developed and unleashed either with unclear intent that morphs and causes some kind of damage, or released by actors who say, 'My intent is for the system to figure out a way to go after a national power grid,' and the machines, with no more context than that, try to learn and achieve that target, and there's no way to trace it back to its source. The autonomy introduces great advantages, but it also introduces great risk, in that we are giving up some control to the machines on both the attack and the defence side.
Antony Funnell: Are there ways to prevent the misuse of this type of technology?
Matt Devost: I think that's very difficult. As we've seen over time with technological advancement, it's very difficult to put the genie back in the bottle once it's out. Governments tried, unsuccessfully, to control encryption 20 years ago, and there are other examples of technology that has proliferated.
I think what we might see is more the equivalent of non-proliferation monitoring: watching the research that takes place so that we understand it and can place some constraints on it while it's in development. That is probably the best we can hope for at this point.
I think it is inevitable, and I think it is a good thing from the perspective of defence. We've shown that cyber defence as practised today does not scale. At least here in the United States we talk about the skills shortfall and the tens if not hundreds of thousands of jobs that can't be filled on the cyber defence side. The attackers are able to automate their attacks, so we need technologies that automate the defence as well if we are going to scale and keep systems secure over the long term.
Antony Funnell: Well, Matt Devost, thank you very much for your time.
Matt Devost: Thank you.
Interviewer: Edward Snowden, let me start with you. What quality, what characteristic defines a whistleblower? What makes someone such as yourself take such massive risks?
Edward Snowden: I think it ultimately comes down to this: you see a structure of laws where the legality of the times is becoming more and more divorced from the morality of our daily lives, our daily decision-making, the sort of values we hold. And when you have that, you ultimately have to make a choice about what you have a greater commitment to: the law, or justice?
Antony Funnell: Assumptions are often problematic, of course, but one might've assumed that after Edward Snowden revealed the mass surveillance undertaken in democratic countries, by democratic governments, some of that behaviour might've tailed off. But according to Dr Monique Mann, there's still much to be concerned about.
Dr Mann is a lecturer in the School of Justice in the faculty of law at the Queensland University of Technology.
Monique Mann: First of all, there's a lot, I think, that we potentially don't know. The mass indiscriminate surveillance by the US and its Five Eyes partners was really brought to light by the Snowden revelations, but we see more recent indications that these types of programs, or other forms of surveillance, are continuing. For example, one of the key revelations from Snowden was the NSA and GCHQ program of intercepting undersea fibre-optic cables.
Antony Funnell: GCHQ being the British equivalent of the NSA, the National Security Agency in the United States.
Monique Mann: Yes, the main eavesdropping agency, you could describe it as. And it was only recently decided by the UK Investigatory Powers Tribunal that the UK security services, including MI5 and MI6, conducted this surveillance regime without proper safeguards or oversight for almost two decades. That really shows these are long-standing surveillance programs.
Antony Funnell: And this is the British agency spying on their own people.
Monique Mann: It's the interception of fibre-optic cables, the backbone of the internet, and through the Five Eyes intelligence partnership that information is shared between the partners.
Antony Funnell: And Five Eyes partners being Britain, Australia, Canada, New Zealand and the United States.
Monique Mann: Yes, exactly.
Antony Funnell: Now, these are countries with a strong democratic tradition. The fact that they are still involved in this kind of mass surveillance of ordinary people and their communications would seem quite extraordinary to some people.
Monique Mann: It's certainly extraordinary, and it's certainly problematic from a human rights and privacy perspective. For example, the recent case brought by Privacy International to the UK Investigatory Powers Tribunal showed that the collection of bulk communications data and bulk personal datasets was quite problematic according to human rights principles, particularly in the European Union.
Antony Funnell: One of the other privacy concerns that you've highlighted is the fact that these government surveillance organisations are also continuing to put pressure on technology companies, on service providers to do their bidding for them. Could I get you to tell us about some of the recent incidents there?
Monique Mann: Well, Snowden's revelations particularly highlighted that tech companies, including Apple, Microsoft and Google, were accused of complicity in state surveillance programs. But most recently it was reported by Reuters that Yahoo scanned millions of emails looking for a digital signature at the request of US security agencies.
Antony Funnell: And there has also been a recent incident in New Zealand.
Monique Mann: Yes, very recently it was revealed that a New Zealand tech company, Endace, played a key role in creating the Medusa system used by GCHQ to harvest data and communications as part of its Tempora program.
Antony Funnell: There has been some push-back in recent times, though, hasn't there, from these technology companies to these kinds of requests from government for assistance with surveillance.
Monique Mann: Certainly there have been tensions, with government agencies attempting to deputise or lean on tech companies to either provide data or build backdoors into security features to unlock encrypted information. Two key examples come to mind: the Apple-FBI case, in which Apple refused to create a backdoor to unlock an encrypted iPhone, and the Microsoft case, in which Microsoft has refused to hand over email content stored in Irish data centres, arguing this would constitute an impermissible extraterritorial search by the FBI.
Antony Funnell: We know about those two cases, but it's difficult to know how much leaning is going on, how much pressure is being put on these organisations, simply because they are often not allowed to actually detail in any way that they've had a request from government.
Monique Mann: Certainly. National security letters, which are these warrants or requests for information from tech companies, quite often come with gag orders prohibiting any disclosure that the requests have even been made.
Antony Funnell: Many people would hear that and think it's a big tick for these technology companies, but they are not cleanskins at all when it comes to surveillance, are they? Tell us about corporate surveillance, particularly in relation to people's emails.
Monique Mann: We are really in an era of capitalist surveillance. These tech companies make a lot of money through targeted advertising, and their whole business model is essentially built on a program of surveillance. As individuals, when we sign up to use these companies' products and services, we make somewhat of a Faustian pact with them. Tech companies including Yahoo, Google and Facebook have been reported to scan email communications for the purposes of developing targeted advertising, also known as online behavioural advertising.
Antony Funnell: So if I have a Gmail account or if I have a Yahoo email account, the chances are the company is actually reading or scanning at least my emails on a regular basis.
Monique Mann: Yes, it's disclosed in Google's terms of service. What they are trying to do is learn information about you: what you like and, more broadly, your online activities, aggregating this in some circumstances with other data about you, particularly with a view to selling you something or predicting what you are likely to buy. And it's not only email scanning; it also occurs through third-party cookies tracking your web browsing history. An example is Google's ad-tracking network, DoubleClick.
Antony Funnell: People who are concerned about this, or who realise it's going on, sometimes use encryption services to try to secure their email communications. But how effective are they? Because we have seen several governments pushing back against encryption.
Monique Mann: There have been discussions more broadly about criminalising encryption and other privacy-enhancing technologies. However, encryption, particularly end-to-end public-key encryption, is a very effective method of concealing the content of your communications: the information is converted into ciphertext, and only the sender and the receiver can read the messages.
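Here's a minimal sketch of the public-key idea Dr Mann describes, using Python's widely used cryptography package. RSA with OAEP padding is just one textbook construction; real end-to-end messengers such as Signal layer much more on top of it:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The receiver generates a key pair and publishes only the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone can encrypt to the receiver using the public key...
ciphertext = public_key.encrypt(b"meet at noon", oaep)

# ...but only the holder of the private key can read the message.
assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"
```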
Antony Funnell: And there are encryption services people can use that don't require a great deal of technical expertise. So encryption is now open to many ordinary people using email or other forms of communication.
Monique Mann: Certainly, encryption is now mainstream, and there are examples of services such as the messaging app Signal or, for example, email provider ProtonMail that not only encrypt information, they also implement principles of privacy by design, which means that they actually collect or can access very little information about you.
Antony Funnell: So encryption is one tool people can use to deal with this situation. But if you don't use encryption, there's not much you can do, is there? Given that these main service providers make their money from this type of surveillance, they are unlikely to change their practices anytime soon, are they?
Monique Mann: Well, in my view, if you aren't paying for a service then you yourself are the product, and we see this Faustian pact being made on a business model of capitalist surveillance. But at the same time trust is very important, and there are numerous privacy trust measures compiled that demonstrate there is a clear business case for companies to invest in privacy. For example, we see an emerging market for privacy; a key example is subscriptions to virtual private networks.
That said, this is a very complex area with numerous competing interests and investments in personal data. But ultimately, in my view, these service providers have a duty to promote informational self-determination: individual control or autonomy over personal information as a fundamental right. I think there is also an argument for service providers to be more transparent and clear about their practices, and for a more active role by regulators such as the Office of the Australian Information Commissioner in developing guidelines, assessing compliance and providing oversight.
Antony Funnell: Lecturer in law Monique Mann, from the Queensland University of Technology.
Thanks to my co-producer here at Future Tense, Edwina Stott, and to sound engineer Steve Fieldhouse.
I'm Antony Funnell, I'll join you again next week. Until then, cheers!