Sunday, July 28, 2013

Nexus 7 - One Year On

This time last year, I was given a Nexus 7 as a birthday gift (I'd hinted really, really strongly!). One year on, Google has released an updated model, and I have a lot better understanding of how the thing works and what it's good at. The new Nexus 7 hasn't been released in Australia yet - but will I upgrade?

I think so. I've come to regard the N7, along with the Roomba, as one of my two most successful "Let's give this a shot and see what it's all about" tech purchases. However, the N7 hasn't achieved this position on the basis of its technology or bang-for-the-buck alone; its significance was to introduce me to the Google ecosystem and I should, perhaps, give a close runner-up award to the Galaxy Nexus phone which I bought as a result of my positive experience with the N7.

I haven't used the N7 as a toy at all. Never watched a movie on it, rarely play music on it, will never play a game on it (I'm not a gamer, unless you count me vs the evil Java compiler as some kind of strategy game).

For me, it's all been about personal organization and having instant access to information wherever I happen to be - in my office, in the kitchen, in front of the TV, in a lecture theatre, in the coffee shop. The apps I use most heavily would be Gmail (I have two business accounts and one university account), Google Calendar, and Google Maps, along with Google Drive/Docs/Apps. The latter, especially, has been getting heavy use for writing up course materials and presentations - I do a lot of the heavy lifting on my desktop machines or on a Chromebook I also bought in the "Let's give this a shot and see what it's all about" mind-set, but it's been really useful to have the ability to view materials while away from my desk, or to display them on a second (really fourth!) screen while working.

Then there's Evernote, which has also been getting heavy use, especially for mundane things like shopping lists. However, with Google Keep maturing and being standard in Android 4.3, it might take over for those lightweight tasks.

Perhaps the biggest unexpected "killer app" is Google Now, which integrates voice search against the Google Knowledge Graph with proactive, ahead-of-need presentation of information cards to organise my day.

And then there's a whole host of other information-handling apps: Wikipedia, YouTube, IMDb for when I'm watching movies (always nice to be able to answer "What else have we seen him in?"), the Guardian for my twice-daily news fix - plus, of course, go41cx and free42 for calculations. And the Kindle app has proved especially useful while dining alone in dimly-lit restaurants recently.

Where has the N7 fallen down, and what would I like to fix? The only thing I would change would be to get a 3G/HSPA+/LTE model next time. Although the N7 is not as dependent on an always-on connection as the Chromebook, and I can get a wi-fi connection all over the university campus, there are times when the longer battery life and larger display of the N7 have made it a better choice for navigation and some other tasks than the Galaxy Nexus phone, which is my "always-connected" device. I suspect that quite a low-cost, low-bandwidth prepaid SIM would be more than adequate, so it needn't break the bank.

The other thing I want to investigate is yet another case, this time with a Bluetooth keyboard. While the standard Google keyboard's gesture ("swipe") input technique really is quite usable, a more capable keyboard would hugely improve the usability of Evernote and similar apps.

All things considered, I think I'll be queuing up for a 32G/HSPA+ N7 when they finally make it to Australia.

Sunday, May 19, 2013

Thanks for Nothing, VeriSign

For, lo, these many years I have used a VeriSign "Class 1 Individual Subscriber - Persona Not Validated" personal X.509 certificate for several purposes. I originally got mine back in The Day, when Netscape offered a web mail service but required that you log in using a personal X.509 certificate for authentication. Not surprisingly, Netscape's web mail service didn't take off, even in competition with the horror that was Hotmail.

But the personal certificate has other uses, too - mainly in conjunction with email. You can use it to sign emails, using the S/MIME protocol. Mail clients like Thunderbird automatically append your certificate to signed emails, making it easy to verify the signature - just calculate a hash over the email message, decrypt the signature using the public key in the attached certificate, and if the two match, the message wasn't modified and it was sent by whoever has the private key that matches the certificate. Very easy, even for completely non-technical users.
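
The verification recipe just described - hash the message, "decrypt" the signature using the public key from the attached certificate, and compare - can be sketched with toy, textbook RSA. This is a hypothetical illustration only: real S/MIME wraps padded RSA signatures in PKCS #7 structures and X.509 certificates, but the hash-and-compare logic is the same.

```python
# Toy sketch of the S/MIME signature check - NOT real cryptography:
# real S/MIME uses X.509 certificates, PKCS #7 and padded RSA.
import hashlib

# Hypothetical tiny RSA key pair (textbook RSA, for illustration only)
p, q = 61, 53
n = p * q          # modulus: 3233
e = 17             # public exponent (published in the certificate)
d = 2753           # private exponent (kept secret by the sender)

def sign(message: bytes) -> int:
    """Hash the message, then 'encrypt' the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recompute the hash and compare it with the 'decrypted' signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"Meet me at noon."
sig = sign(msg)
print(verify(msg, sig))                  # True: message unmodified
print(verify(b"Meet me at 1pm.", sig))   # False: message was tampered with
```

If either the message or the signature is altered in transit, the recomputed hash no longer matches and verification fails - which is exactly what gives the recipient confidence in a signed email.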

And once you've received someone's certificate, the email client automatically places a copy in your certificate store. Since the certificate contains their public key, you can now encrypt emails that you send to them (assuming that you also have a private key and certificate). It's almost foolproof.

But now comes the $64,000 question: how does the recipient of a signed email know that they can trust the public key in the attached certificate? Of course, that's the whole point of a PKI (Public Key Infrastructure): you can trust the certificate to identify an entity because the certificate is itself signed by a Certification Authority. Of course, for an inexpensive certificate like the Class 1 Persona Not Validated ones, all it really means is that the person who bought the certificate was able to collect it via a link in an email sent to them by the CA, proving that they can access the email postbox.

But to check the validity of the certificate signature itself, we need the public key of the CA. No problem - browsers and email clients have those public keys built in, in the form of self-signed root certificates. We explicitly trust the root certificate, and given that, we know we can trust any certificates signed by it (actually, signed by the private key that corresponds to the root, but the terminology gets pretty lax, as we'll see).
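
That chain walk - follow each certificate's issuer upwards until you hit an explicitly trusted root - can be modelled very simply. A hypothetical sketch (real validation also checks signatures, validity dates and key usage; the certificate names below are the VeriSign ones discussed later in this post, and "My Personal Cert" is a placeholder):

```python
# Toy model of certificate chain validation: each certificate names its
# issuer, and validation succeeds only if we can follow issuers all the
# way up to a root we explicitly trust.
certs = {  # hypothetical store: subject -> issuer
    "My Personal Cert": "VeriSign Class 1 Individual Subscriber CA - G3",
    "VeriSign Class 1 Individual Subscriber CA - G3":
        "VeriSign Class 1 Public Primary Certification Authority - G3",
}
trusted_roots = {"VeriSign Class 1 Public Primary Certification Authority - G3"}

def chain_ok(subject: str) -> bool:
    """Walk the issuer chain until we reach a trusted root (or fail)."""
    seen = set()
    while subject not in trusted_roots:
        if subject in seen or subject not in certs:
            return False   # loop, or issuer missing from the store
        seen.add(subject)
        subject = certs[subject]
    return True

print(chain_ok("My Personal Cert"))  # True with the full chain present
```

Remove the intermediate CA entry from the store and `chain_ok` returns False - the "issuer could not be found" situation described below.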

And so that is how it has been, since time immemorial. Each year, I have renewed my certificate and dutifully backed it up from Firefox, then imported a copy into Thunderbird and used the new one.

Last year, I even taught the details of this as part of a university class on Cryptography and Information Security, and I set the students an exercise - get yourself a free Class 1 personal certificate from Comodo, send me a signed email, exchange signed emails with other students, start sending each other encrypted emails, then obtain my certificate from the VeriSign Digital ID directory (or directly import it from a file) and send me an encrypted email. It all went swimmingly well, with no problems.

This year, lots of problems - albeit a good opportunity for learning, for my students. First of all, I forgot to upload my new certificate, and the old one had obviously expired. But even when I exported and uploaded the new, current, certificate, they still couldn't import it into Thunderbird. Typically, they would get an alert dialog:


This is not cool. Now, I knew the certificate was perfectly valid. I even opened it using the Windows Crypto Shell Extensions on my desktop machine, and here's what it looked like:


Looks fine to me. Clicking on the "Certification Path" tab reveals that the certificate isn't directly signed by a VeriSign root certificate - rather, it's signed by an intermediate CA certificate and that one is signed by the root:


But here's how it looks on my Windows 7 machine:

What's this? "... not enough information to verify this certificate"? Let's have a look at the "Certification Path" tab:


The issuer of the certificate could not be found. Hmm. Remember, I'm looking at the Windows certificate store here, not the Firefox or Thunderbird one - they were fine because I'd been using that cert with Thunderbird on that machine for the best part of a year with no problems. But it looked as though the Windows certificate store was missing one or other, or both, of the VeriSign CA certificates. So I went into the Firefox Certificate Manager and exported both the required certificates, noticing as I did so that the VeriSign Class 1 Public Primary Certification Authority - G3 certificate was a Builtin Object Token (distributed as part of the browser) while the VeriSign Class 1 Individual Subscriber CA - G3 certificate was stored in the Software Security Device - i.e. it had been imported separately.


Now, it was over to the Windows 7 machine, and the Certificates MMC snap-in. To run this, use Start -> Run, type in "mmc.exe" to start MMC, then use File -> Add/Remove Snap-in, Select "Certificates" and add to "Selected snap-ins", select "My user account" and click OK. A quick inspection revealed that the VeriSign Class 1 certificates weren't there, but right-clicking almost anywhere and choosing "All Tasks" -> "Import..." allowed them to be imported successfully.


Once this is done, my personal certificate shows up correctly on the Windows 7 desktop:


Now, the students were experiencing problems importing my certificate into Thunderbird, but I bet it was the same problem - missing VeriSign root and intermediate signing certificates. I've sent a couple of them the certificates as described above, and I've also sent others my certificate exported from Thunderbird using the "X.509 Certificate with chain (PEM)" format, which puts all three required certificates into a single .crt file. I'm waiting to hear back from them, but I'm expecting success.
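
A PEM export "with chain" is simply several Base64 certificate blocks concatenated into one text file, so it's easy to check what you're about to send. A quick sketch (the chain contents below are fake placeholders, not real certificates):

```python
# Quick sketch: check how many certificates a PEM file contains.
# A PEM "certificate with chain" export concatenates several Base64
# blocks; a PKCS #12 backup, by contrast, is binary DER and contains
# your private key, so it should never be handed out.
def pem_cert_count(data: bytes) -> int:
    """Count certificates in a PEM file (0 means it isn't PEM at all)."""
    return data.count(b"-----BEGIN CERTIFICATE-----")

# A hypothetical three-certificate chain, as exported from Thunderbird
chain = b"""-----BEGIN CERTIFICATE-----
MIIB...personal-cert-placeholder...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB...intermediate-CA-placeholder...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB...root-CA-placeholder...
-----END CERTIFICATE-----
"""
print(pem_cert_count(chain))  # 3: personal, intermediate and root
```

If the count comes back as 1, you've exported the bare certificate without the chain, and the recipient will hit exactly the "issuer not found" problem described above.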

All this is complicated by some terminological inexactitudes in the industry and the software used. One has to be careful with Firefox/Thunderbird's "Backup..." function, which exports both the certificate and the corresponding private key in PKCS #12 (.p12) format; the file displays in a Windows folder as a certificate with a key in front of it. You don't want to give anyone else this file, as it contains your private key (albeit password-protected). It's mainly useful for exporting your key and certificate from Firefox and importing them into Thunderbird or other applications. So, a "certificate file" may not contain just a certificate! (This is almost as egregious an error as the old man page for OpenSSH, which used the word "certificate" to mean "private key"!)

On the other hand, if you want to give someone else your public key, in the form of a certificate, you have to "View..." the certificate, click on the "Details" tab and then choose "Export...". At this point, one can choose PEM (.crt - appears as a certificate) or PKCS #7 (.p7c - appears as a rolodex card) format, with or without the certification chain.

But the real problem is that VeriSign - now part of Symantec - appears to be backing, slowly, and without notifying their customers, out of the personal certificate or Digital ID business. They are no longer distributing their Class 1 root and intermediate certificates with Firefox or Thunderbird, or with Windows. What's worse, my students have not been able to download my certificate from the VeriSign Digital ID directory. It's there (as are records for my previous certs going all the way back to May 1997!) but there are no links or buttons to do anything with it. No-one can download it, and it looks as though I'm not going to be able to renew it - although without distribution of the CA certificates in email clients, it's of dubious value anyway.

By contrast, Comodo not only offers free Class 1 personal certificates, but also operates the SecureZIP Global Directory where you can place your certificate for use with the PKWare SecureZIP utility. And their Class 1 CA certificates are more widely distributed than Verisign's, making them less prone to problems.

So, if you've been experiencing problems with your VeriSign Class 1 certificate, perhaps you now know why. And if, like me, you've been paying for your certificate for the best part of 20 years, you'll join me in saying:

Thanks for nothing, VeriSign.

Sunday, April 21, 2013

2013: The Year of the Facebook Mobile Attack?

Facebook has been pushing - if you don't update, you'll receive notifications in your newsfeed - a new version of the Facebook app for Android. I've reluctantly upgraded the version on my Nexus 7, but I'm holding off installing it on my phone. At this point, I'm not sure the increased risk is worth it.

"What risk?", I hear you ask. There's a potential exposure in the new Facebook app; the app requires somewhat looser permissions than the previous version, including - wait for it - the ability to directly call phone numbers. Big red flag here, Facebook. The major form of malware seen to date on Android phones has been apps that use this permission to call premium-rate international numbers, running up a huge phone bill for the victim and delivering a nice profit for the attacker.

Permissions required by the Facebook app for Android -
notice "directly call phone numbers"

The need to make phone calls arises from the introduction of the new "Facebook Home" - an app which takes over the home screen of a phone to present a Facebook-centric experience - as well as Facebook Messenger, which integrates Facebook messaging with SMS as well as supporting voice messaging. It's not clear to me why the main Facebook app, which does not support these functions, should also require access to the phone functionality, not to mention the ability to record audio, download files without notification, read your contacts and many other privacy-invading permissions.

At the same time, Facebook has been a terrific vector for the spread of malware on the PC, sometimes in the form of infected videos or apps, as well as privacy-invading apps which harvest your profile, contacts or other information, or download files.

The message: expect this to spread rapidly to mobile devices. Facebook now exposes a relatively large attack surface, and an attacker who can compromise the Facebook app on Android can use its permissions in a range of creative ways.

2013: the year of the Facebook mobile attack? I hope not, but it looks likely to me.

Sunday, April 14, 2013

Google+: The Good, The Bad and The Ugly

I've recently introduced a group of online friends to Google+. We'd mostly met via Facebook, where we'd shared things via a secret group, but disenchantment set in and the group was fractured when some of our number were locked out of their accounts (the reasons for that are not at all straightforward and I won't go into them here).

So a few of us were chatting about how to get around this, and off the top of my head I quipped, "We ought to set up a similar group as a Google+ community". Then I thought, "why not?" and a minute later, I'd done it.

I spent the day intermittently writing short "How-To" posts for the new users I was dragging across from Facebook, and answering their questions, helping them to figure out how to get things done, etc. It's been a couple of days, and the experience has given me a better understanding of Google+.

Neither Good Nor Bad - Just Important


Circles. You have to grok circles. Circles have both read and write, or in and out, functionality. You can use circles to filter what you see in your home page - for example, you can suppress a circle from appearing in your Home page stream (great if they are prone to posting NSFW images!). That's the "read" functionality. You can also limit posts to only certain circles so that your doings are not broadcast to the wrong people - that's the "write" functionality, which will be more important to some people (to be honest, I regard anything that I post on a social networking site as public).

The problem is that the importance of circles, and the things one ought to consider when creating them and adding people to them, are not immediately obvious - it's only after you've spent some time fiddling with the various configuration options that their importance becomes apparent.

The Good


Now to some good points I've noticed and others have commented on.

Firstly, the Home page has filtering, so you can view just specific circles. Across the top of the home page are buttons for "All", "Friends", "Following", or whatever circles you've created. This means you can choose to see only posts from colleagues during the workday, then spend some time catching up with friends, or reading up on products/technology you're following.

The integration with Gmail, Contacts, Youtube, Blogger, etc. is nice - but only important for users who have already engaged with the Googleverse. It's good for me - I use Google Apps for both business and university purposes, and it was that that led me to get my Google+ profile sorted out and then start using it - but for people looking for a Facebook alternative, the fact that you might have to use some other Google service such as Google Drive to get things done seems odd.

There are some nice usability features; for example, you can drag and drop pictures directly into the "Share what's new..." comment box - there's no need to click on "Add Photos/Video" first. However, on the down side, sharing URLs requires you to click on a link button to get a field, rather than auto-recognising the URL in your text. And Google+ doesn't automatically provide previews of URLs in comments like Facebook does.

The privacy and security options are very granular; this is great if you're willing to take the time to learn and use them. Not everyone is willing, though - and it can be confusing for the new user, who doesn't know what all these things are.

Communities are essentially equivalent to Facebook groups, and can be made public with no barriers to joining, public with approval for joins, or secret, which will require an invitation to join. A nice feature which Facebook doesn't have is "Categories"; for example, I quickly created a "Using Google+ And This Community" category where I could post hints and answer questions without overwhelming the main "Discussion" category. Of course, the default view when one logs in is "All posts", which displays everything - and it takes the new user some time to discover and use categories. Until they do, they post everything in the default "Discussion" category and (under "Bad") there's no way for moderators to move posts to the correct category.

It's quiet. I've given up on Twitter; it's been over-run by social media "marketers" who think they're slick, and aren't. Facebook is rapidly heading the same way; my newsfeed is starting to fill with posts from link farmers trying to trap people into granting access to their Facebook profiles. Google+ doesn't have that, as far as I can see. Yes, there are marketers there - I follow a couple of my favourite brands - but so far, it's a pretty well-behaved place.

The Bad


But there are problems, and it's been obvious as I've introduced these new users.

It's noisy. By default, every post, every comment on a post, every damn thing that happens, fires off an email. There's a notifications on/off button in communities, but that doesn't seem to do much to quieten things down - instead, you have to go to your profile settings and turn down the notification delivery options there.

Configuration options and settings are spread out in various places, mostly accessible from your Profile, via the gear-wheel icon at top right. Some options are under "Profile and Privacy" (https://www.google.com/settings/privacy) - for example, you can control which people appear in the "People in his/her circle" listing on your profile, on a circle-by-circle basis if you want. But other settings, such as just what "Your Circles" means when you share something with "Your Circles", and the email/SMS notification noise level, are under "Google+" (https://www.google.com/settings/plus). It all gets rather confusing, especially for the new user.

Another big issue is the lack of group chat functionality. Just like Facebook, there's a "Chat" tab at lower right of most pages, but unlike Facebook, you can't add multiple people to the conversation. Googling "Google+ group chat" leads to articles that imply it's possible, but the software has obviously changed since they were written. And the confusion over Google's IM products doesn't help: there's Google+ Chat, Google Talk, Google Messenger and Google Voice, and they're all different things. In fact, it seems that two different things on different platforms (PC vs Android) can even have the same name even though they're incompatible and not interoperable.

If you really want a multi-way conversation, Google+ pushes you towards "Hangouts" which offer up to 10-way videoconferencing and have some really neat features such as screen-sharing, etc. However, not everyone has a webcam, or even a microphone, or they don't want to be seen. And Hangouts require special software; when you start a hangout (or try to join one?) without the software, you are prompted to download GoogleVoiceAndVideoSetup.exe. The messages seem to imply that the software has installed itself; however I soon discovered that it hadn't, and when I found and ran GoogleVoiceAndVideoSetup.exe, it downloaded and installed the actual code required. At this stage, no-one else in our little group seems to have completed the process and so we haven't actually accomplished a Hangout. If we do, we might well move this feature to the "Good" side of this balance sheet.

File sharing is difficult. Facebook groups have a "Files" tab and even an "Add file" link right at the top of the page. There's nothing like this in Google+ communities. The easiest way to share something seems to be to upload it to Google Drive, make it public and accessible to anyone who has the link, then copy and paste the link into a Google+ post. This is awkward at best, and it also means that the file is stored in an individual user's Drive, rather than storage space that belongs to the community. At the very least, the Share... menu option in Google Drive ought to have options for sharing to Google+ - that functionality already exists in Youtube and could almost be copied and pasted into the Google Drive code base. (Update: it turns out that there may be a button which allows direct sharing to Google+ [or email, Facebook or Twitter], but I don't see it because I'm using the Google Apps version of Google Drive. Just another complication - different people see different versions of the same thing, depending upon which Google services they're signed up for.)

Terminology keeps changing. For example, the term "stream" has fallen into disuse - your "stream" is now your "Home page". And I've already mentioned the confusion over the IM apps.

Functionality keeps changing and is inconsistent. Google+ - and the rest of the Googleverse - is obviously in a constant state of change and flux. New functionality is constantly appearing while older and less-used - but popular with its users - features are liable to disappear. I need only mention Google Reader at this point - but it's an issue I'll return to.

Related to this is the fact that while Google is positioning Google+ as the central hub of their applications and services, at least for identity and profile management, it is not very good as a user-centric dashboard. As one of my friends pointed out, iGoogle was much better for that - but it's due for end-of-life later this year. It's a great pity - Google needs something that provides a single page with widgets for Gmail, Calendar, Contacts, Google+, etc. Ironically, I realised that's what my home screen on the Nexus 7 provides - it would be wonderful if Google could provide a web page that could run the same widgets as Android devices. How about it, Google?

The Ugly


Now we're down to cosmetics - the kind of thing that a bit of CSS fine-tuning could probably fix.

Google+ doesn't seem to fit as much information on the page as Facebook does. I say "seem" because, on close inspection, they both use the same font size for the main text of posts. Google+ puts its major app icons down the left column, while Facebook lists groups, apps and pages there; as you scroll, Facebook's left column scrolls away, leaving empty white space. Over on the right, Google+ lists more "stuff" you might like, while Facebook puts a scrolling "ticker" there, which is dense, with a smaller font and less white space.

Part of the reason for the less dense appearance of Google+ is its use of boxes around posts and grey shading. Facebook's all-white page is much cleaner looking. Google+ could really use a makeover from a good designer.

Summing Up


Overall, the impression one gets is that Google+ is "geekier" - it's stronger and more innovative on the back end server functionality. There are lots of configuration options, but Google annoys most first-time users by not setting appropriate defaults - there are far too many email notifications and the privacy settings probably aren't set high enough for most users, requiring a good half-hour or more of stumbling around, changing things by trial and error.

I believe that Google+ is going to grow and get better - as more and more users acquire Android devices or switch to using Google Apps and Gmail, they will be assimilated, and the functionality will be refined. But for now, it's still rough round the edges and a bit abrasive for the user switching over from Facebook.

Monday, April 1, 2013

Much Ado About DNS Amplification Attacks

There's been much wailing and gnashing of teeth from one or two people over DNS amplification attacks, following an over-hyped DDoS attack on Spamhaus using this technique. The attack relies on sending DNS requests with the source IP address spoofed to be the address of the victim, which is then swamped by comparatively large reply datagrams. Here are two techniques to make sure that your systems can't be used by Bad Guys to conduct these attacks.

For years now, in my CISSP Fast Track Review Seminars, I've been advocating the use of reverse path filtering in routers and firewalls. In fact, it's an Internet Best Practice - see BCP 38 [1]. It's implemented in the Linux kernel and many distributions turn it on by default. On Red Hat Enterprise Linux, CentOS or Scientific Linux, for example, take a look at the /etc/sysctl.conf file, looking for the following lines near the top:

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1


The value of 1 enables strict reverse path filtering - before forwarding a packet, the kernel essentially asks itself, "If I were sending a reply to the source address of this packet, would I send that reply back out the interface that I received this packet on?". If the answer is no, the packet is dropped. So, for example, if a packet with a source address on your internal network arrived on the external interface, it would be dropped. If your distribution has this set to 0 (disabled), change it to 1 and you have solved the problem.
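
The reverse-path check the kernel performs can be sketched as a longest-prefix-match lookup on the source address. A minimal model, assuming a hypothetical two-interface router with eth0 facing the internal LAN and eth1 the Internet:

```python
# A minimal sketch of the reverse-path (rp_filter) check, assuming a
# toy routing table mapping interfaces to the networks behind them.
import ipaddress

# Hypothetical router: eth0 faces the internal LAN, eth1 the Internet
routes = {
    "eth0": [ipaddress.ip_network("192.168.0.0/16")],
    "eth1": [ipaddress.ip_network("0.0.0.0/0")],  # default route
}

def reverse_path_ok(src_ip: str, in_iface: str) -> bool:
    """Would a reply to src_ip go back out the interface it arrived on?"""
    src = ipaddress.ip_address(src_ip)
    # Pick the interface with the longest-prefix match for the source
    best_iface, best_len = None, -1
    for iface, nets in routes.items():
        for net in nets:
            if src in net and net.prefixlen > best_len:
                best_iface, best_len = iface, net.prefixlen
    return best_iface == in_iface

print(reverse_path_ok("192.168.1.7", "eth0"))  # True: legitimate
print(reverse_path_ok("192.168.1.7", "eth1"))  # False: spoofed internal
```

A packet claiming an internal source address but arriving on the external interface fails the check and is dropped - exactly the spoofed traffic that fuels amplification attacks.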

If your distro does not use the sysctl.conf file, you can achieve the same effect with the following command in a startup script such as /etc/rc.d/rc.local:

echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter

A value of 2 enables "loose" reverse path filtering, which only checks that the source address is reachable via some interface, not necessarily the one the packet arrived on. This is the safer option for networks which use asymmetric routing (e.g. the combination of satellite downlinks with dial-up back-channels) or dynamic routing protocols such as OSPF or RIP - but where routing is symmetric, strict mode gives the best protection.

However, reverse path filtering really needs to be implemented by all ISPs, to stop datagrams with spoofed source addresses from getting anywhere on the Internet. For those of us who aren't ISPs but just operate our own networks, a better fix is to make sure that your DNS server either does not support recursive lookups, or supports them only for your own networks.

If your DNS server is intended only as a primary master or a slave for your own public zones, and will therefore be authoritative, then just edit the named.conf file to set the global options:

options {
     allow-query-cache { none; };
     recursion no;
};


However, if your DNS will provide recursive lookups for your internal machines, then restrict recursive lookups like this:

acl ournets {203.35.0.152/29; 192.168.0.0/21; };

options {
        directory "/var/named/data";
        version "This isn't the DNS you're looking for";
        allow-query { ournets; };
        allow-transfer { ournets; 139.130.4.5; 203.50.0.24; };
        allow-recursion { ournets; };
};


(Replace the network addresses in the ournets acl with your own addresses, obviously.)

The allow-transfer directive restricts zone transfers, and you would normally only allow slave DNS servers (e.g. those provided by your ISP) and perhaps a few addresses within your own network - I've allowed transfers from all addresses in ournets, so that the dig command can be used for diagnostics. The allow-recursion directive allows recursive lookups only from our own machines.

Finally, the allow-query directive means that only your own network(s) can even query this DNS - if you need to allow queries of your public zones, you can allow that in their specific options, later:

zone "ourcorp.com.au" IN {
        type master;

        file "db.ourcorp.com.au";
        allow-query { any; };
};
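
The way an ACL like ournets gates a query can be sketched very simply: a query is allowed if its source address falls inside any of the ACL's networks. (This is a simplified model - real BIND ACLs also support negation, TSIG keys and nested ACLs.)

```python
# Rough sketch of BIND-style ACL matching: allow a query if its source
# address falls inside any of the ACL's networks.
import ipaddress

# The same networks as the "ournets" acl in the named.conf above
ournets = [ipaddress.ip_network("203.35.0.152/29"),
           ipaddress.ip_network("192.168.0.0/21")]

def acl_allows(acl, src_ip: str) -> bool:
    """True if src_ip matches any network in the ACL."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in acl)

print(acl_allows(ournets, "192.168.3.20"))  # True: internal client
print(acl_allows(ournets, "8.8.8.8"))       # False: recursion refused
```

With allow-recursion gated this way, a random Internet host can't use your server as an open resolver, which is precisely what amplification attacks depend on.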


Should you choose to go even further, there are even patches for BIND which allow you to rate-limit responses, so that you can provide protection for your own addresses against DNS amplification attacks.

The bad news is that if you are running Windows, the only option you have is to completely disable recursion - the Windows DNS server was originally based on really old BIND code and does not have most of these options.

Implement these two simple fixes, and you can be confident that your systems won't be part of the problem.

References:


[1] BCP 38: Network Ingress Filtering - Defeating Denial of Service Attacks which employ IP Source Address Spoofing - available online at http://tools.ietf.org/html/bcp38

[2] US CERT Alert TA13-088A, DNS Amplification Attacks. Available online at http://www.us-cert.gov/ncas/alerts/TA13-088A

Sunday, March 24, 2013

High Kernel CPU Usage - Grrr!

My poor old desktop machine, Sleipnir, is much abused and overloaded. It's maxed out, with 3 GB of RAM (the maximum usable under 32-bit XP), a 300 GB IDE C: drive, and 1.5 TB and 2 TB drives for use with my SageTV software for DVD images and recorded TV shows, respectively.

For a few weeks now, the poor old thing has been dragging her feet. Everything was slow; menus would take many seconds, even a minute, to appear, programs were slow to load, and once RAM was fully committed, any switching of programs that involved the swap file - and with Firefox's memory leaks, that usually didn't take long to occur - was painful.

I didn't think too much of it; it's well known that Windows machines degrade over time. I've always put it down to registry rot, coupled with Microsoft's unholy alliance with hardware manufacturers that gives them an incentive to drive users to replace their computers frequently.

But it got to be a major Pain In The Ass. My work was slowing down; Eclipse was dragging along and even simple edits were getting to be painful. Worse still, TV recordings were becoming corrupted. Sleipnir contains three TV tuners, and we rely on the SageTV software to automatically record TV shows so that we can watch them at a convenient time. Downstairs, our main TV has a Sage HD-300 extender which allows us to view recordings or live TV, and we count heavily on this to allow us to watch our favourite shows when our workload allows. In fact, the TV won't work without it as there is no external antenna and simple rabbits-ears don't get a usable signal in that location - my upstairs office has much better reception.

However, now both recorded and "live" TV was jittering, dropping out and downright corrupted. At the least, there were occasional ear-shattering chirps; at worst, shows were just unwatchable. The pressure was on to either replace the computer or get the problem fixed.

So I did a little hunting around. Sleipnir is so heavily loaded that I routinely run the Task Manager to keep an eye on it, and it was already obvious that the CPU Usage display was showing most of the time spent in the kernel. At the same time, the hard drive activity light was solidly on. Hmmm. Disk activity involving lots of CPU? That shouldn't be happening. (You have to imagine me stroking my chin thoughtfully at this point.) Usually, disk I/O is handled by the DMA controller, which transfers sectors (or more) directly from the disk controller buffers into main memory with no CPU intervention. The CPU hasn't been involved since the good old days of ...

PIO! Programmed I/O - where the processor itself enters a loop to transfer data, word by word (it used to be byte by byte, in the old days), from the disk controller into main memory.
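To make the difference concrete, here's a toy model in Python (illustrative only - real drivers talk to I/O ports and a DMA controller, not Python lists; the function names and the "operation" counts are my own invention):

```python
# Toy model: why PIO loads the CPU while DMA barely touches it.

def pio_transfer(controller_buffer: bytes) -> tuple[bytes, int]:
    """PIO mode: the CPU itself copies the data, one word (2 bytes) at a time."""
    memory = bytearray()
    cpu_ops = 0
    for i in range(0, len(controller_buffer), 2):
        memory += controller_buffer[i:i + 2]  # one CPU-driven port read
        cpu_ops += 1
    return bytes(memory), cpu_ops

def dma_transfer(controller_buffer: bytes) -> tuple[bytes, int]:
    """DMA mode: the CPU just starts the transfer; the whole buffer moves in one shot."""
    return bytes(controller_buffer), 1

sector = bytes(range(256)) * 2  # one 512-byte sector
pio_data, pio_ops = pio_transfer(sector)
dma_data, dma_ops = dma_transfer(sector)
print(pio_ops, dma_ops)  # 256 1
```

Same data arrives in memory either way, but in PIO mode the CPU does hundreds of operations per sector - multiply that by a busy disk and you get exactly the "kernel time pegged, drive light solid" symptom described above.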

Could it be? Opening Device Manager (from within the "My Computer" properties) and examining the "Primary IDE Channel" properties, "Advanced Settings" tab soon revealed that yes, indeed! - the "Current Transfer Mode" was set to "PIO" rather than the expected "Ultra DMA Mode 5". It turns out that if Windows experiences 6 or more CRC (Cyclic Redundancy Check) errors while reading a drive, it degrades the DMA mode setting, eventually stepping down to the lowest DMA mode and then reverting to PIO mode. This won't actually help anything - the problem is with the disk drive, not the controller - but of course, the CPU is now having to work hard during disk transfers and it slows everything right down.
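The step-down behaviour can be sketched like this (a toy model of the behaviour as described above, not Windows's actual driver logic - the exact mode ladder and the six-errors-per-step threshold are my reading of it, not something pulled from Microsoft's code):

```python
# Toy model: the IDE driver drops to a slower transfer mode after
# repeated CRC errors, eventually landing in PIO.

MODES = ["UDMA5", "UDMA4", "UDMA3", "UDMA2", "UDMA1", "UDMA0", "PIO"]

def mode_after_errors(crc_errors: int, errors_per_step: int = 6) -> str:
    """Return the transfer mode after a cumulative count of CRC errors."""
    step = crc_errors // errors_per_step
    return MODES[min(step, len(MODES) - 1)]

print(mode_after_errors(0))   # UDMA5 - healthy drive
print(mode_after_errors(6))   # UDMA4 - first step down
print(mode_after_errors(36))  # PIO   - and now the CPU does all the work
```

The point is that the degradation is one-way: the driver never steps back up on its own, which is why the machine stays slow long after the errors stop and why the fix below is so unintuitive.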

IDE Properties - if the "Current Transfer Mode" is "PIO", you're in trouble.

Simply setting the "Transfer Mode" to "DMA if available" won't reset things. Rather, you have to click on the "Driver" tab and - yes, this is correct - uninstall the driver. This is a considerable leap of faith, especially considering that this is Windows we're talking about here, people. In fact, you have to uninstall the driver on all IDE channels, and then reboot.

On rebooting, Windows will produce "New hardware discovered" messages and will reinstall the drivers. It did for me, and it should for you, too. If you haven't uninstalled the driver on all channels, then you'll probably find it's still running in PIO mode on the problematic channel. If you have uninstalled the driver on all channels, you might have to reboot yet again.

If this works for you, you should be back on the air with a decently-performing machine. It certainly worked in my case. However, it's probably only a matter of time before the problem arises again - if there were six CRC errors on a drive, it may well be failing. In my case, I have a spare 320 GB IDE drive on the shelf - being a hardware hacker, I have spares for most things - and so I'll take care to back up anything vital and swap drives when I get time. All my work is stored on a server with a RAID array and off-site backup, or is backed up to multiple machines and in the cloud anyway, and my iTunes library is also backed up to a pair of external hard drives rotated weekly to an off-site location. So I'm willing to sit and wait, in the interests of seeing how long it takes for the required six CRC errors to accumulate.

In the meantime, everything is so much snappier. The "All Programs" menu appears in less than a second rather than anything up to a minute, and I can watch TV while recording three programs simultaneously and running Eclipse, Thunderbird and Firefox.

Life is good again!

Thursday, October 18, 2012

And So It Comes to Pass

Back in January, annoyed by the number of people wanting a password lock built into the Kindle - an idea that is frankly naive and problematical - I sat down and wrote what I thought Amazon ought to do, based upon my experience working in security and e-commerce. It became quite a long blog article, which can be found here: "If *I* Was Amazon".

Well, blow me down - they've only gone and done it!

Amazon Whispercast is Amazon's back-end administration tool for organizational users of the Kindle, such as schools, colleges and companies - but it looks as though it would work for families as well. Many of the features will appeal to organizations deploying the new models of Kindles - especially the Kindle Fire HD and Fire HD 8.9" models - such as automatic configuration of wi-fi network connections, VPN configuration, ActiveSync with Exchange servers, etc.

But the basic ability to centralize book-buying, organize users into groups and automatically deploy books to the Kindles is very like the "parental control" requirement. And there's also the ability to create named policies which selectively block access to the Kindle store, block access to social networks, etc.

All in all, it looks like Amazon have done a lot of work on the back end, just as I predicted. I can't wait to check it out, in depth.