Friday, November 23, 2018

Navigating Cyber

Back in the mid-1970's, I was an undergraduate studying Cybernetics and Instrument Physics in the Department of Cybernetics at Reading University in the UK. I was attracted to the ideas of feedback and control theory introduced by the MIT mathematician Norbert Wiener in his landmark book, Cybernetics: Or Control and Communication in the Animal and the Machine, which I had devoured in my final year of high school.

Those ideas were eventually subsumed into a variety of specialized fields - missile guidance systems, bionics, artificial intelligence, economics and econometric modeling, ecology, general systems theory and others. Meanwhile, my own journey took me into electronics, personal computing hardware, software, operating systems, software development, networking and eventually, the destination for many of us old jacks-of-all-trades, computer security.

Titles are more a matter of fashion than semantic precision, so in due course, I became an information security consultant - or, for added gravitas, an information assurance consultant. Whatever it was called, it was a long way from my starting point of cybernetics.

And then, suddenly, the circle closed. "Oh, you work in cyber security?", I was asked.

And so several years of teeth grinding began.

The term "cybersecurity" began its rise to popularity thanks to a National Security Presidential Directive issued late in the George W. Bush Presidency. NSPD-54 of January 8, 2008. The accompanying memorandum (now declassified) defines "cybersecurity" to mean "prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and non-repudiation".

The prefix "cyber", in this context, seems to relate to the next definition: "cyberspace", which means "the interdependent network of information technology infrastructures, and includes the Internet, telecommunications networks, computer systems, and embedded processors and controllers in critical industries".

The term "cyberspace", in turn, was coined by the science fiction author William Gibson in his 1982 short story collection Burning Chrome, but really popularized in his novel Neuromancer. Wiktionary suggests it is a "Blend of cybernetics + space".

So, where does "cybernetics" come from? Although Wiener coined the modern meaning with his 1948 book, its etymology begins with the Ancient Greek κυβερνήτης ("kubernetes"), meaning a steersman, pilot or navigator. The κυβερνήτης was the man holding the steering oar at the rear of a galley (something I know from another course at Reading, The History of the Warship, taught by an eminent materials scientist with a classicist bent, Prof. J. E. Gordon). It was the steersman's job to keep the ship on course despite the vagaries of wind, tide and currents, holding a heading for the next headland.

It is from this root that we get the word governor - originally referring to the centrifugal governor, an arrangement of spinning brass balls which levered a valve open and closed to regulate the speed of a steam engine - perhaps the earliest example of a feedback system applying proportional control.
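In code, the governor's principle reduces to a handful of lines. Here is a minimal sketch of proportional control in Python - the setpoint, speed and gain values are hypothetical, purely for illustration:

# Proportional control: apply a correction proportional to the error
# between the desired setpoint and the measured value.
setpoint = 100.0   # desired engine speed (arbitrary units)
speed = 60.0       # current engine speed
kp = 0.4           # proportional gain (hypothetical value)

for step in range(10):
    error = setpoint - speed
    speed += kp * error   # the valve opens or closes in proportion to the error
    print(f"step {step}: speed = {speed:.1f}")

Each iteration shrinks the error, and the speed converges on the setpoint - which is just what the spinning brass balls achieved mechanically.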

And of course, the same root gives us the terms govern, government and governance. The latter is important in information security - corporate governance comprises the arrangements for oversight of management, acting to correct things when an enterprise is "off course" - and it leads in turn to information systems governance and information security governance.

So, in that sense, we're still in the realm of cybernetics.

The shift from information security to cyber security has wrought real changes, though. The former term encompasses information in all its forms; it extends beyond computer and network security to cover paper (which is why we have information handling rules for classified documents, cabinets, safes, etc.) as well as tacit knowledge. But "cyber" carries other connotations - it abandons the physical world and replaces it with allusions to robots, androids, Doctor Who's Cybermen and the online world.

On the other hand, "cyber" also connotes machines and brings us to the world of cyber-physical systems - drones, autonomous vehicles, industrial control systems, power plants, factory and warehouse robots, and even bionic devices such as pacemakers, bionic limbs and implantable cardioverter defibrillators. While risk management for infosec specifies impact on information assets in dollar terms, now we have to think in terms of injuries and life safety.

When people talk about "cyber security", though, what do they actually mean? In my experience, they're really talking about the security of things connected to the Internet, and securing systems against attacks delivered via the Internet. This tends to de-emphasize insider attacks, taking us back to the firewall-centric, M&M model of information security: hard and crunchy on the outside, but soft and gooey on the inside. That view was always problematic - all the more so with the move to deperimeterization and cloud services.

In the end, we still don't have a well-defined term for what we do - "systems security" might be the most appropriate, in my view. But at least you know why, when you talk to me about "cyber" ("cyber all the things!") you are met with a quizzically raised eyebrow.

You keep using that word; it does not mean what you think it means.

Friday, October 26, 2018

How to FAIL at Online Customer Service

In just over a week, I'm flying to the US for a conference (the NIST NICE Conference, as it happens). My office has booked my flight with Qantas, and today I received an email from the airline offering me the chance to upgrade for a combination of frequent flyer points and cash.

Clicking the "Make your offer" button in the email takes me to a page where I can select how much I wish to bid for the upgrade, using a slider. The instructions state:
  1. Select the flight segment(s) you wish to upgrade. If there’s more than one flight segment, you can choose to make an upgrade offer for some or all of the segments. If you do not wish to make an upgrade offer for a segment, move the slider to the left to indicate ‘no offer’.
  2. Adjust the slider to show the amount of money you want to offer and if you are a Qantas Frequent Flyer member, input the Qantas Points you want to offer and select the update button.
Only, there's no slider. In both Firefox and Chrome, the part of the page where the slider should appear looks like this:

[Screenshot: the bid area, showing no slider control]

The slider is probably supposed to appear above, below or between the "No Offer" label and the maximum amount of $4,105. But it's not there.

So I open the browser console, and there's a very obvious JavaScript error:

[Screenshot: the browser console, showing the JavaScript error]

The error is in this line:

plusgrade.page.modules.bid.slider.loyaltyPointsFormat.groupingSeparator = ","; 

You can't set a property of an object that doesn't exist! Because loyaltyPointsFormat is undefined, the assignment throws a TypeError, the script halts, and the slider is never rendered.

It turns out there is no way to make an offer and get an upgrade.

Now, I'd like to notify Qantas of the problem with this process (which is outsourced, by the looks of things). I'm helpful like that. I mean, how many customers are willing and able to provide you with this kind of console log information to quickly resolve a problem?

Only, there's no way to get the information to Qantas. Replying to the email simply gets a bounce: "Please note that this email is unattended.". Clicking on the "Online help" link at the foot of the email leads to https://qantas.custhelp.com/app/ask, which, despite being titled "Email Us", has nothing to do with email at all. Rather, it uses a pair of "Category" and "Sub category" drop-downs to try to categorise the user's query. After carefully inspecting these drop-downs - a time-wasting exercise if ever I saw one - there appears to be no way to direct a query relating to the upgrade process. I could search a library of FAQ's, but I don't have a question - I have a failure. I don't think searching the FAQ's for "loyaltyPointsFormat undefined" is going to get me very far, do you?

This is the heart of the problem: in an attempt to reduce customer service costs, the company has ensured that the customer cannot obtain service. Worse, it wastes the customer's time.

So, I'm writing this up in hopes that, one day, someone from Qantas will stumble over it and fix their broken upgrade bid process - and, more importantly, provide a way for customers with problems that Qantas doesn't anticipate - and those are the ones they really need to hear about - to get service. Until they do, they're going to miss out on business, fail to maximize revenues, give users a lousy customer experience (CX) and drive down their NPS (Net Promoter Score).

I'd call that an epic online customer service FAIL - wouldn't you?

Update: Afternoon of 29th October - three days later, somebody has finally woken up and the problem has been fixed. We can now see the slider, and - wow! a little dial which provides a graphical indication of where the slider is positioned! Somebody is really thinking through the usability!

Oh, well. At least it's working. . . for now.

Tuesday, April 3, 2018

Installing YARA from Source Code on CentOS 7

A short post - really more of a reminder to myself - on how to install YARA on CentOS Linux 7.

CentOS is an enterprise Linux distribution, and as a consequence aims for stability - it tends to have older versions of many software packages. This can make installing some software a bit of a challenge.

YARA is a pattern-matching program for use by malware analysts - it's a kind of Swiss Army Knife that can calculate hashes, perform string and regular expression matching, and understands various binary executable formats, like PE - most of the techniques that are useful for finding and investigating malware.

Preparation


Installing the various required packages can only be done as root. Rather than prefixing each command with sudo, just su to root:
$ sudo su -
Many of the packages required by YARA are also a little ahead of the standard CentOS releases - a common situation for up-to-date versions of many programs, like PHP and others. So you may already have the first requirement - a Yum configuration for the EPEL (Extra Packages for Enterprise Linux) repository. If you have, then you're good to go - otherwise, enable EPEL with the command
# yum install epel-release

If that doesn't work, because you don't have the CentOS Extras repository enabled, then try this:
# rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Now you're ready to start installing the required packages. Start with GNU Autoconf and libtool:

# yum install autoconf libtool

Then add the OpenSSL development files:

# yum install openssl-devel

If you intend to use the YARA cuckoo and magic modules - the magic module needs the libmagic development files, and the cuckoo module needs the Jansson JSON library:

# yum install file-devel
# yum install jansson jansson-devel

Finally, the latest YARA rules require Python 3.6, so if you don't have it:

# yum install python36 python36-devel

Installing YARA itself


From this point, everything goes as per the instructions at http://yara.readthedocs.io/en/v3.7.1/gettingstarted.html. You should drop your root privileges (exit or switch to another session) then download the latest version of YARA. From that point, it goes something like this:

$ tar xzvf yara-3.7.1.tar.gz
$ cd yara-3.7.1
$ ./bootstrap.sh
$ ./configure --enable-cuckoo --enable-magic
$ make
$ sudo make install

Finally, run the YARA tests:

$ make check

Among the output that follows, you should see:

PASS: test-alignment
PASS: test-api
PASS: test-rules
PASS: test-pe
PASS: test-elf
PASS: test-version
PASS: test-exception

[...]
============================================================================
Testsuite summary for yara 3.7.1
============================================================================
# TOTAL: 7
# PASS:  7
# SKIP:  0
# XFAIL: 0
# FAIL:  0
# XPASS: 0
# ERROR: 0
============================================================================

If all is correct, you're good to go! Have fun, and nail that malware!
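To prove the installation works end to end, try a trivial rule - a minimal sketch, with hypothetical rule and file names:

$ cat > hello.yar <<'EOF'
rule HelloWorld
{
    strings:
        $hello = "hello" nocase
    condition:
        $hello
}
EOF
$ echo "Hello, malware analysts" > sample.txt
$ yara hello.yar sample.txt

If everything is in order, yara prints the name of the matching rule, followed by the name of the matched file.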

Friday, March 30, 2018

Optus Cable with Google Wi-Fi

We had Optus Cable installed yesterday, replacing an aging ADSL connection. We already had Google Wi-Fi installed, replacing a complicated setup consisting of a Linux-based firewall and multiple access points, but ADSL had become painful, with disconnections whenever it drizzled, let alone rained, and one phone line not working at all (perhaps disturbed by a linesman while trying to get the ADSL fixed).
Up with this, I shall not put!

I had no intention of downgrading our Google Wi-Fi setup with the somewhat primitive devices Optus supply, so the problem was to get the combination working. Google Wi-Fi can lose some functionality if hidden behind another router, so I had googled for information on the Optus-provided devices to see how they performed. Posts on discussion boards suggested the Netgear CG3000 could be configured as a bridge via some barely-documented settings, while the Sagemcom devices should be avoided at all costs. With that in mind, I selected a plan that provided the CG3000 and figured I would let the Optus technicians get it working and then figure it out. I also took the precaution of buying a spare CG3000 - just so I could replace a Sagemcom if worse came to worst, or perhaps have one configured the way I want and the original to put back into place if necessary.

In the end, there was far less drama than expected. The Optus techs turned up with a Netgear CM500V modem and a separate Sagemcom 3864V3 router. I let them install it, connected to it via my laptop to show it was all working, and bid them adieu.

Then I unplugged the Sagemcom and put it back in the box and performed the following procedure:
  1. Switch off the CM500V. This is necessary, as the modem remembers the MAC address of the router it is connected to and will not talk to the Google Wi-Fi router without a reboot.
  2. Switch the CM500V back on again. It may take a few minutes to connect, so get it started while you're doing the rest of this procedure.
  3. Unplug the Google Wi-Fi router from the ADSL modem.
  4. Check the Google Wi-Fi router has realised it is offline - it should show a pulsing amber light.
  5. Turn off mobile data on your phone, then run the Google Wi-Fi app and go to Settings -> Network & General -> Advanced Networking -> WAN. The WAN settings are not editable unless Google Wi-Fi is offline and your phone is talking to it directly on the wireless LAN.
  6. Change the WAN settings to "DHCP" and tap "Save".
  7. Check the CM500V - the Power, Downstream, Upstream and Internet LED's (the top four) should all be solid green by now.
  8. Plug in the Ethernet cable from the Google Wi-Fi WAN port to the modem. Give it a few seconds - the Ethernet LED on the CM500V should turn green and the Google Wi-Fi router will get its WAN IP address via DHCP and should settle down to a stable white light.
  9. Phew! That's better!
  10. Go back to the Google Wi-Fi app Shortcuts page, tap "Network check" and then "Test Internet". Marvel at the impressive speed test result!

The NBN HFC Internet connection box sits, forlorn, on the wall of our house while nbnco tries to figure out how to get DOCSIS 3.1 up and running on the cable. We just couldn't wait that long in the end. However, I expect this procedure should work just fine with an NBN cable modem.

Final caveat: I haven't tested the CM500V with a phone, since we have an Asterisk VoIP setup. But I've no reason to suspect the switch to the Google Wi-Fi setup will affect phone operation in any way.

Sunday, April 30, 2017

Another Chromebook Use Case

Recent restrictions on traveling with laptops have caused difficulties for business travelers.

My better half recently booked airline tickets to visit family in the UK, traveling with a codesharing combination of Qantas (Sydney - Dubai) and Emirates (Dubai - Birmingham). This is a much more convenient alternative to taking QF1 all the way to Heathrow and then organising land transport to the Midlands, but even QF1 still transits through Dubai, so would be subject to the same problem.
[Photo: the Acer Chromebook 14, in Luxury Gold trim]

The Problem

Recently the US instituted a ban on passengers traveling from several Middle Eastern airports carrying electronic devices in their hand baggage. The ban applies to tablets (including some of the older, larger Kindles), laptop computers and other personal electronic devices, and apparently is based on received intelligence on bomb-making techniques.

This occurred a few weeks after my wife had bought her tickets, and we were initially unconcerned - until the UK followed suit, and specifically added Dubai to the list of ports concerned. Although this was a family visit, my wife needs to run her business while traveling, maintaining contact with clients and working on project reports and presentations. She had previously taken her Windows laptop for this purpose and so we initially considered how this could be done in the light of the new restrictions.

The most obvious alternative to hand luggage was to put the laptop into checked luggage. But there are problems with this approach.

Firstly, airlines (and aviation regulators) have specific rules for the carriage of dangerous goods, and lithium ion batteries feature quite prominently in the dangerous goods list. For example, Australia's Civil Aviation Safety Authority provides quite detailed advice to passengers ("Travelling safely with batteries and portable power packs", available online at https://www.casa.gov.au/standard-page/travelling-safely-batteries) and is clear that spare batteries must be in carry-on baggage only, because of the risk of fire. No advice is provided in relation to batteries installed in devices, probably because of the expectation that passengers will carry expensive and fragile devices as carry-on baggage anyway.

However, a laptop packed into a suitcase - especially a zip-up lightweight suitcase - poses its own risks. First, there is the possibility of theft; I have personally had electronics stolen from a checked bag, presumably by a baggage handler. Second, there's the possibility of damage - suitcases are stacked up in containers for loading into the freight holds of large aircraft, and a lightweight suitcase at the bottom of a pile could be subject to considerable pressure and deformation. Finally, the natural inclination is to wrap the laptop in soft clothing to provide protection against the shock of dropping - but what if pressure on a power switch or deformation of the case causes the laptop to power up? It is quite likely to overheat, since the clothing will block the air vents - and the clothing is also likely to be highly flammable.

For these reasons, we rapidly ruled out the idea of packing the laptop in a suitcase - and I hope everyone else does, too.

The airline eventually proposed a scheme in which passengers transiting Dubai could surrender their laptops for carriage in the hold - but this is unattractive, too - since the laptop bag is the obvious place to store travel documents (e-ticket, passport, etc.) and in-flight requirements. Surrender the bag, and you lose access to those, or have to have yet another bag to carry them; surrender the laptop without the bag, and it is unprotected. Both cases still leave an exposure to damage, loss or theft. Not a comfortable option, either.

The Solution


What to do, then? Fortunately, there is an easy alternative: order a Chromebook in advance of travel, for delivery to a UK address, and that is what we chose in the end.

I drew up a short list of requirements for the various alternative solutions to the problem:

  • Functionality. The device has to support essential business applications: email/calendar, word processing, spreadsheet and presentation graphics.
  • Low cost. If we acquired a device just for use on visits to the UK, it would only get used for a few weeks each year, so a high-cost device is not justified. This requirement extends to software licences as well.
  • Low maintenance. The machine would lie unused for three to six months at a time, and if the first task on arrival was to install updates and patches, requiring multiple reboots and lots of interaction (e.g. via the Help -> About menu option in Mozilla applications), that's time badly spent on a short trip - but if not done, security exposures would result.
  • Security. If the device is stolen, lost, lent to a third party, etc. there should be no exposure of sensitive data on the device and no threat to system integrity.
  • No interruption to work, and no work lost. Locally-stored files, e.g. on the hard drive of a Windows laptop, could accidentally be left behind, requiring work to be done all over again.
  • Simplicity. We wanted to avoid complicated schemes of copying files to and from USB keys or compact flash. This poses too much risk of an old file over-writing a newer version.

Fortunately, the use of a Chromebook meets these requirements perfectly. Since my wife's business uses Google GSuite (formerly Google Apps), she is already familiar with some of its components and uses them, particularly for collaborative projects. So we knew the functionality requirement was met. We already have another Chromebook and a Chromebox, so the device is familiar, too.

The Chromebook meets the low maintenance requirement quite easily, as there's very little on the device itself to be updated, and that is taken care of with a few minutes downloading and a ten-second (at most!) reboot. All applications are cloud-based and continually updated.

Security, simplicity and the requirement for no work to be lost are dealt with by the fact that the Chromebook and GSuite are cloud-based. All she had to do was transfer more of her work to GSuite in the weeks leading up to the trip, and all her work documents were available immediately upon initial login. Similarly, she can leave the Chromebook behind and upon arrival, immediately resume work. Everything is stored in the cloud; nothing is stored on the machine. And because we use two-factor authentication with security keys, there's no real possibility of someone using the machine to gain access to her data. For the same reasons, the family member charged with storing the device is relieved of a lot of responsibility.

Finally, cost: the Acer Chromebook 14 is only GBP199.00 from Amazon.co.uk (see https://www.amazon.co.uk/Acer-Chromebook-CB3-431-14-Inch-Notebook/dp/B01MY6VFL3/). That is sufficiently inexpensive that the low utilization is not a problem - it's a reasonable price to pay to solve the travel problem.

The Pudding


The proof of the pudding is in the eating, as they say. The trip is almost over, and my wife reports that the Chromebook worked well. Even as a non-technical user, she was able to get it unpacked, set up and working with minimal effort, and she has used it for ten days to complete a variety of work tasks. Not having to worry about taking a laptop was a load off her mind, and not having a laptop case to carry was a load off her shoulders.

The Chromebook is now permanently stationed in the UK for use on future trips, and travel - especially via Dubai - will be a lot easier. The whole exercise has proved yet another use case for the Chromebook, and it has turned out to be a useful addition to our business technology toolbox.

Sunday, April 2, 2017

An Infosec View of Privacy

Information security professionals, and especially cryptographers, tend to think in terms of preserving the security properties associated with information assets, and CISSP's in particular tend to start with the CIA Triad. Clearly, privacy relates to the first member of that triad - confidentiality - in some way, but the relationship is not immediately obvious. For example, we often use secrecy as a synonym for confidentiality, but privacy is something different.

The difference is centered on agency or control, and in particular the relationship between the subject of the information and the information custodian.

The vast bulk of enterprise information - whether the enterprise be private or public - is internally generated, and the subject is, ultimately, the enterprise itself. For example, an ERP system revolves around accounting data (GL, A/R, A/P, etc.) and the ledgers therein describe the enterprise's financial state and history of transactions (as well as future revenue, of course). A CRM system may contain information about customers, but the bulk of that information relates to the enterprise's transactional history with the customer - sales calls, orders placed, etc.

In such cases, the enterprise is custodian of its own information - it is both subject and custodian. There is no conflict of interest - as custodian, the enterprise is never going to breach the confidentiality of its own information, and indeed will implement controls - policies, identity and access management, security models - to ensure that its employees and agents cannot. The enterprise, as the subject, has authority over the custodians and users of the information.

However, a conflict of interest arises when an enterprise is custodian of information about identified (or identifiable) individuals. For example, a medical practice maintains health records about patients; it is the custodian, while the patients are the subjects.

The patient records obviously have value for advertising and marketing purposes, in addition to the intended purpose of patient diagnosis and treatment. For example, a company selling stand-up desks or ergonomic chairs would see considerable value in a list of patients who have complained of chronic back pain, while over-the-counter pharmaceuticals marketers might want to sell directly to patients whose test results indicate pre-diabetes, early indications of hypertension or any of a range of conditions. And an unscrupulous marketer might approach an unscrupulous medical practice manager, resulting in patients being subjected to sales calls for products they do not necessarily want or - worse still - their medical histories or problems being leaked to other interested parties such as family members or employers.

There is a clear conflict of interest here. The subject of the data is not the custodian, and in fact, has no authority over the custodian. It is in the custodian's interest to on-sell the subject's data to anyone and everyone who is willing to pay for it. And while the example of a medical practice involves only a small business, many enterprises are much, much larger and employ many lawyers, resulting in a power imbalance between the enterprise and the affected individual.

This is why governments, acting on behalf of civil society and the individual, enact privacy legislation - the legislation gives the individual some degree of authority over enterprises and restores the balance of power.

Note that many information security controls are able to preserve confidentiality, but not privacy. Personal information is stored in databases and document management systems which are ultimately under the control of an information asset owner, with users who are free to access the information for a range of purposes; if one of those users decides to extract data, copy it to a USB key and sell it externally, the first two steps are probably authorized, while the third cannot be detected, let alone prevented.

Hence the need for a privacy policy and strong privacy education and awareness within the enterprise. In the end, privacy comes down to personal ethics and compliance with the law. It is really a matter of trust in the integrity of those who have access to personal information - and the threat of legal action provides a degree of assurance in that integrity.

Notice that, in this model, the distinction between confidentiality and privacy can be extended beyond individual persons to companies or other entities. For example, the Chinese Wall model is another situation in which information about one entity is in the custody of another (e.g. information about clients held by a consulting firm would obviously be of great interest to other clients who are competitors). In that sense, then, the Chinese Wall model is intended to preserve privacy rather than confidentiality.

Finally, consider personal information in the custody of the person themselves. The subject and the custodian are the same individual - there is no conflict of interest, privacy laws do not apply, and the issue here is confidentiality, not privacy.

The distinction between confidentiality and privacy, then, is whether the subject of the information has authority over the custodian - if he does, it's a matter of confidentiality, but if he does not, then it's a matter of privacy.

Of course, there are other common conceptions of privacy, as well as legal views relating to photography, etc. but these are not considered here.

Sunday, July 17, 2016

A Little Learning - Electronic Voting and the Software Profession

Over the last week, I've had occasion to ponder the expression that 'a little learning is a dangerous thing'.

Quite spontaneously, a number of people have posted on Facebook in opposition to the idea of electronic voting. One linked to a YouTube video which clearly demonstrated the problems with some of the voting machines used in the US. Others posted a link to a Guardian opinion piece that labeled electronic voting a "monumentally fatuous idea".

The YouTube video - actually, a full-length documentary from 2006 entitled "Hacking Democracy" - showcased a number of well-known problems with the voting machines used in the US. Touch-screen misalignment causing the wrong candidate to be selected, possible tampering with the contents of memory cards, problems with mark-sense readers, allegations of corruption in the awarding of contracts for voting machines - these have been known for some years.

[Embedded video: "Hacking Democracy" (2006)]

The Guardian article fell back on claims that "we could not guarantee that it was built correctly, even with source code audits. . . Until humans get better at building software (check back in 50 years) . . . we should leave e-voting where it needs to be: on the trash-heap of bad ideas."

However, by exactly the same argument, online banking is also a "monumentally fatuous" idea, along with chip & PIN credit card transactions, online shopping, fly-by-wire airliners, etc.

Whenever I've suggested that, actually, electronic voting is not such an outlandish idea, I've met with vehement opposition, usually supported by an appeal to authority along the lines of "I used to be a sysadmin, and let me tell you . . ." or "I'm a software engineer and it's impossible to write foolproof software. . ."

There's an old saying among cryptographers, that every amateur cryptographer is capable of inventing a security system that he, himself, is unable to break. In this case, the author of the Guardian piece demonstrates the converse - that because he, himself, is unable to come up with a system that he can't break, he is certain the professionals can't either. 


I have to say, that's a novel form of arrogance; to be so sure that because electronic voting systems are beyond your own modest capacity, nobody else can do it, either.

Yes, we've all seen software projects that were near-disasters, we've known programmers whose code had to be thrown away and rewritten, and we've seen poor practices like the storage of passwords and credit-card numbers in unencrypted form. I watch students today 'throw' code into the Eclipse IDE then run it through the debugger to figure out what the code they've just written actually does, and then I think back to my days of careful, paper-and-pencil bench-checking of Algol code, and I cringe! But not every system is designed and implemented that way.

It's true that electronic voting is a difficult problem, but it's one that some very fine minds in the cryptographic and security communities have been working on for several decades now. The underlying cause of this difficulty is that a voting protocol must simultaneously preserve what, at first sight, are a number of conflicting security properties.

Most security people are familiar with the basic security properties we work to preserve in the information and systems in our care: confidentiality (also known as secrecy), integrity (correctness) and availability - the so-called "CIA triad". But there are many more, and some of them, at first glance, contradict each other.

For example, anonymity is important in a democracy; no-one should be able to find out how you voted, so that nobody need fear any form of reprisal. For an electronic voting system, this means that the vote cannot be associated with the voter's identity, whether represented as a login ID, an IP address, a public key or some other identifying information. But democracy also requires that only those who are entitled to vote are able to do so - in Australia that means registering on the electoral roll and identifying yourself at the polling place. This property is called eligibility.

At first glance, these requirements are mutually exclusive and contradictory - how can you identify yourself to claim eligibility, while at the same time remaining anonymous? In the physical world, the time and space between your identification, the booth where you mark your ballot paper, and the random placement of the ballot paper in a box all serve to provide anonymity. Fortunately there are cryptographic techniques that effectively do the same thing.

An example is the use of blind signatures, which were originally invented by David Chaum in 1983 [1], although I'll use a later scheme, made possible by the RSA public-key system's property of homomorphism under multiplication. There are three parties: Henry, who holds a document (e.g. a vote); Sally (our electoral registrar), whom Henry wants to sign the document without knowing what it is that she is signing; and Victor (the verifier and vote tabulator), who needs to verify that the document was signed by Sally.

I don't want to delve into the mathematics too deeply, and am constrained by the limited ability of this blogging platform to express mathematics - hence '=' in what follows should be read as 'is congruent to' in modular arithmetic, rather than the more common 'equals'. Henry obtains Sally's public key, which comprises a modulus, N, and a key, e. Henry then chooses an integer blinding factor, r (coprime to N, so that it has an inverse), and uses this with the key, e, and the modulus, N, along with his vote message, m, to calculate a blinded message

c = m x r^e (mod N)

and sends this to Sally along with any required proof of his identity. If Sally determines that Henry is eligible to vote, she signs his blinded vote by raising it to the power of her private key, d:

s' = c^d (mod N) = (m x r^e)^d (mod N) = r x m^d (mod N) (since r^(ed) = r (mod N), because ed = 1 (mod φ(N)))

and returns this to Henry. Henry now removes the blinding factor to get the correct signature:

s = s' x r^-1 (mod N) = r x m^d x r^-1 (mod N) = m^d (mod N)

Henry can now send his vote message, m, off to Victor, along with the signature, s. Victor uses Sally's public key, e, to check whether

s^e = m (mod N)

If the two are equal (congruent, really) then the signature is correct, and Victor will count the vote. Notice that Henry does not identify himself to Victor, and Victor does not know who he is; Victor is willing to accept that the vote comes from an eligible voter because Sally has signed it. And, very importantly, note that Sally does not know how Henry voted - she signed the blinded version of his vote message.
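For the code-minded, here is a toy round trip of the whole protocol in Python - a minimal sketch, with a textbook-sized key and hypothetical message and blinding values (a real implementation would use a large modulus and padded messages):

from math import gcd

# Sally's toy RSA key: p = 61, q = 53, so N = 3233 and phi(N) = 3120;
# e = 17 and d = 2753 satisfy e * d = 1 (mod phi(N)).
N, e, d = 3233, 17, 2753

m = 42   # Henry's vote message, encoded as an integer < N
r = 99   # Henry's blinding factor; must be coprime to N
assert gcd(r, N) == 1

# Henry blinds the vote and sends c to Sally: c = m * r^e (mod N)
c = (m * pow(r, e, N)) % N

# Sally signs the blinded vote without learning m: s' = c^d (mod N)
s_blind = pow(c, d, N)

# Henry unblinds: s = s' * r^-1 (mod N), leaving the signature m^d (mod N)
s = (s_blind * pow(r, -1, N)) % N   # modular inverse requires Python 3.8+

# Victor verifies with Sally's public key: s^e = m (mod N)
assert pow(s, e, N) == m
print("Victor accepts the vote:", pow(s, e, N) == m)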

If all that mathematics was a bit much, consider this simple analogy: if I want you to sign something for me without seeing what it is, I can fold up the paper to be signed in such a way that the signature space is on top when the paper is placed in an envelope. I then insert a piece of carbon paper - the paper that's used in receipt books to make a second copy of a written receipt, and which used to be used to make copies of typewritten documents - on top of the paper but inside the envelope. I then ask you to sign on the outside of the envelope, and the pressure of your pen will imprint your signature, via the carbon paper, onto the unseen paper inside the envelope. Voila! You have blind-signed a document, which I can now extract from the envelope once alone.

My point is that cryptographers have, for many years, known about, and had solutions for, the superficially contradictory requirements of eligibility and anonymity.

In practice, voting protocols have many more requirements:
  • The voter must be registered or authorized to vote (eligibility)
  • The voter must vote once only (uniqueness)
  • The vote must be recorded confidentially (privacy) and cannot be associated with the voter's identity (anonymity)
  • The vote must be recorded correctly (accuracy)
  • The voter must be able to confirm that his vote was recorded correctly (verifiability)
  • The voter must not be able to sell his vote (receipt-freeness)
  • No-one must be able to duplicate votes (unreusability)
  • The vote must be unalterable after recording (integrity)
  • The voter must not be vulnerable to coercion (uncoercibility)
  • The votes must not be revealed or tallied until the end of the election (fairness)
  • The electoral authority - or everyone - must know who voted and who did not
But we still have more tools at our disposal, such as zero-knowledge proofs, digests, signatures and, of course, conventional symmetric crypto, which we can use to build more sophisticated protocols that do satisfy these security requirements. Some very fine minds indeed have been working on this for decades now, although obviously their work is not well known outside a relatively small community of cryptographers with an interest in voting.

When the NSW Electoral Commission were working on their iVote system, they asked several universities to get involved. They supplied the design documentation for iVote, including references to the various cryptographic protocols used (e.g. Schnorr's Protocol - a zero-knowledge proof based on the discrete logarithm problem). A team from UNSW and I independently wrote programs which took over 250,000 votes and validated them, in a ceremony attended by scrutineers of the various political parties.
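To give a flavour of the kind of building block involved, here is a toy run of Schnorr's protocol in Python - a minimal sketch with deliberately tiny, hypothetical parameters (real deployments use groups of 2048 bits and more), in which a prover demonstrates knowledge of a secret x without revealing it:

import secrets

# Public parameters: p and q prime, q divides p - 1, and g generates
# the order-q subgroup mod p. (Toy values: p = 23, q = 11, g = 4.)
p, q, g = 23, 11, 4

x = 7             # the prover's secret
y = pow(g, x, p)  # the prover's public key, y = g^x (mod p)

# Commitment: the prover picks a random nonce k and sends t = g^k (mod p)
k = secrets.randbelow(q - 1) + 1
t = pow(g, k, p)

# Challenge: the verifier replies with a random c
c = secrets.randbelow(q)

# Response: the prover sends s = k + c*x (mod q); on its own, s reveals
# nothing about x because k is random
s = (k + c * x) % q

# Verification: accept if and only if g^s = t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof of knowledge accepted")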

The iVote system is quite sophisticated - it's an enhancement of what's called a "split central tabulating facility" design, with a third, separate verification server which allows voters to verify their votes via a phone interface, while the counting system itself is airgapped from everything else. The process I described above compared the votes from the counting system with the encrypted votes in the verification server. The voter can additionally obtain verification that their vote was decrypted correctly via the web interface.

It's true that researchers did find a vulnerability in the SSL/TLS library on one server in the system. I'm not familiar with that part of the system, but I'm pretty sure that even if that vulnerability was exploited in a man-in-the-middle attack, the attacker would not be able to manipulate the vote in that session as the votes are encrypted in the application protocol and do not rely on the transport layer encryption of TLS. However, the browser supports TLS anyway, so it's a good idea to use it as one more layer of a defence in depth - and an alert user would expect to see that padlock in the URL bar in any case, so it would be a mistake not to use it.


Prior to getting involved with the iVote project, I'd been somewhat cynical about the prospects for electronic voting. Although I knew something of the underlying principles, I felt it was still in the too hard basket. And reports of the problems with US voting machines didn't help, although a little investigation will reveal they are almost embarrassingly primitive and bear no resemblance to the protocols I am discussing here. But they still contribute to popular fear of electronic voting systems and pollute the overall climate. Watch the video above and you'll see what I mean.

However, I discovered that work on voting protocols was more advanced than I had thought, and the implementation of the iVote system was considerably more sophisticated than most people would imagine.

But what has surprised me is the negative attitude of software professionals. The (ISC)2 Code of Ethics [2] requires me, as a CISSP, to "Protect society, the commonwealth, and the infrastructure" and as part of that effort, to "Promote and preserve public trust and confidence in information and systems". And yes, it also requires me to "Discourage unsafe practice". At this point, I cannot say that the introduction of electronic voting is an unsafe practice - over 250,000 people used it in our last State election - but it's hard to promote public trust and confidence when software professionals who ought to know better are so active in promoting distrust. There's a huge gap between the quality of code produced by a junior programmer working on a mobile app with time-to-market pressures and the careful design of a voting system based on decades of cryptographic research and development, and the fact that the former may be unreliable is not an indication that the latter is inevitably insecure.

Only a fool would guarantee that an electronic voting system is 100% secure and error-free - but then, only a fool would guarantee that paper-based voting systems are 100% secure and error-free. However, humans are better at writing software than the Guardian author suggests, and electronic voting systems are in use today, will increase in popularity and are very unlikely to result in the downfall of democracy.

References


[1] Chaum, David. “Blind Signatures for Untraceable Payments.” In Advances in Cryptology, edited by David Chaum, Ronald L. Rivest, and Alan T. Sherman, 199–203. Springer US, 1983. http://link.springer.com/chapter/10.1007/978-1-4757-0602-4_18.

[2] (ISC)2 Code of Ethics - The Pursuit of Integrity, Honor and Trust in Information Security, International Information Systems Security Certification Consortium, 2010. Available online at https://www.isc2.org/uploadedfiles/(isc)2_public_content/code_of_ethics/isc2-code-of-ethics.pdf