tag:tech.cyberclip.com,2013:/posts Technoclippings 2016-07-08T05:56:53Z Paul Clip tag:tech.cyberclip.com,2013:Post/341976 2011-12-10T17:24:07Z 2013-10-08T16:34:44Z iPhone. Single. Looking to make friends on any network. I'm at SFO, connected to the public wifi, and in the span of 15 minutes I've already blocked my MacBook Pro (running Lion) from connecting to over 40 iPhones and iPads. What's going on?

Being a geek, a security geek, and slightly paranoid about what's going on in my laptop, I use a wonderful little utility called Hands Off! This app enables me to control network and file operations on a per program basis. Since connecting to the SFO wifi I'm being bombarded with pop-ups like this one:

According to this site, usbmuxd is the "USB Multiplex Daemon. This bit of software is in charge of talking to your iPhone or iPod Touch over USB and coordinating access to its services by other applications."

Other posts link this to iTunes and iPhone/iPad synchronization. I don't own an iPhone (it's a nice device but I love my Nexus S), I do have an iPad, and I'm not currently running iTunes. Still, my laptop detects all sorts of devices on the network.

I wonder if the owners realize they're broadcasting their names loud and clear?

The next step is to connect to some of these devices to see what they say. Unfortunately I have a flight to catch!
Paul Clip
tag:tech.cyberclip.com,2013:Post/341997 2011-11-08T04:37:08Z 2013-10-08T16:34:45Z My Hammer is Better than your iPhone
(Image credits)

Since the iPhone 4S came out, I've heard that Steve Jobs wanted to destroy it, that people are so much happier on their iPhones, even my friend Garry. But guess what? Though my primary computer is a MacBook Pro and I haven't been without an iPad since their launch, I really like my Android phone (yes, an Android phone): a Nexus S.

Now I know a smartphone is just about the most personal piece of technology you can buy: We carry them everywhere, play with them constantly (or until the battery runs out), and fuss over them assiduously. In that light, this post isn't an attempt to prove to you that Android is better than iOS, just a desire to share some of its qualities I appreciate.

1. Keyboard. Yes, I know Siri is amazing (or not), but most of the time you'll still be typing on that tiny keyboard. On iOS, that keyboard has barely evolved in four years and it blows. On Android you can actually replace the default keyboard. My favorite is Swype: it's fast, fluid, and feels natural. It almost achieves (dare I say it) Apple-level elegance. If Swype isn't your thing, SwiftKey X most certainly will be.

2. Home screen. Android lets you do so much more with your home screen than iOS does. You can embed shortcuts to apps, documents, bookmarks, and even app-specific features. Widgets make your home screens even more useful by surfacing views into apps such as calendars, tickers, weather, etc. iOS 5 makes up for this a little with the updated notifications, but Android's options are far more powerful.

3. The buttons. Android has four buttons to iOS's one (which now has triple-click functionality, talk about overload). The Home button is there, as are Back, Menu, and Search. Back is the handiest IMO, esp. its ability to cross applications. Sharing something in one app? Go ahead, then hit Back and you're returned to your original flow. As an aside, one of my biggest beefs with Android apps is that they're not designed to take advantage of these buttons: Why include a magnifying glass on the screen when there's a search button available?

4. Long presses and sharing. Long presses, the ability to pull up a contextual menu by long-pressing an object on the screen, sound trivial, but used well they unclutter the UI and give users handy shortcuts to functions. Sharing, a feature almost all apps... share, lets you send data (text, URLs, tweets, pictures, etc.) from one program to another. Natural and powerful.

Android is by no means perfect and the iPhone has a lot going for it (it is, after all, a cathedral), but hopefully this post redressed the balance a little, at least until someone with a hammer comes along!
Paul Clip
tag:tech.cyberclip.com,2013:Post/342020 2011-08-19T02:51:00Z 2013-10-08T16:34:45Z Belgians Love Android

OK, you may not think so, but when our waiter put this pot of steaming mussels in front of me the other day in Belgium, I couldn't help but think: "Boy! That really looks like the Android mascot!"

Come on! You can see the resemblance right?

No?! How about now? :-)

Paul Clip
tag:tech.cyberclip.com,2013:Post/342039 2011-07-31T20:27:28Z 2013-10-08T16:34:45Z Taming Software Complexity Complexity is everywhere in our world: In our ever-growing canon of laws, in the volatile & unpredictable nature of the stock markets (esp. now with the abundance of autonomous trading systems), in our tax code, and of course in the second law of thermodynamics. It's little wonder then, as programmers the world over know, that complexity is definitely present in our software. Of all the long-term threats to applications, complexity is perhaps the second most critical (the first being no longer meeting user needs :-).

(Complexity can be beautiful too. Source)

Unfortunately, complexity goes hand in hand with success: the more popular an application, the more demands are placed on it, and the longer its "working life". Left to their own devices, both factors will increase complexity significantly. Life for a mildly successful app is even worse: the low demand usually results in a never-ending "maintenance mode" where poor code stewardship often abounds.

Without ways to tame complexity, any evolving piece of software, no matter how successful, will eventually collapse under the load imposed to simply maintain it.

How is software complexity defined? Many metrics have been proposed, from simple approaches such as lines per method, statement counts, and methods per class, to more esoteric-sounding ones like efferent coupling. One of the most prevalent metrics in use today is cyclomatic complexity, usually calculated at the method level and defined as the number of independent paths through each method. Many tools exist to calculate it; at RelayHealth we've had good success with NDepend.
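To make the metric concrete, here's a rough sketch in Ruby that approximates cyclomatic complexity by counting branch points (one per decision, plus one for the entry path). Real tools like NDepend analyze the parsed code rather than matching text; this is only meant to convey the idea.

```ruby
# Rough approximation of cyclomatic complexity: count decision points
# (branches) in a method's source and add one for the entry path.
# A regex over keywords is enough to illustrate; real tools use the AST.
def cyclomatic_complexity(source)
  decisions = source.scan(/\b(?:if|elsif|unless|while|until|for|when|rescue)\b|&&|\|\|/).size
  decisions + 1
end

simple = "def add(a, b)\n  a + b\nend"

branchy = <<~RUBY
  def classify(n)
    if n < 0
      :negative
    elsif n == 0
      :zero
    else
      :positive
    end
  end
RUBY

puts cyclomatic_complexity(simple)   # 1: a single path through the method
puts cyclomatic_complexity(branchy)  # 3: the if and elsif each add a path
```

A method scoring 1 or 2 is trivial to reason about; once the count climbs past 10 or so, most teams treat it as a refactoring candidate.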

Identifying areas of complexity in the code base is easy. The hard part is deciding what to do about them. Options abound...

Big Ball of Mud
The "Do Nothing" approach is always worth exploring, and it typically results in Brian Foote's Big Ball of Mud. Foote wrote the paper because, however noble the aspirations of their developers, many systems eventually devolve into a big, complex mass. He also notes that such systems can sometimes have appealing properties, as long as their working life is expected to be mercifully short. Fate often intervenes, though, and woe betide the programmers stuck maintaining a big ball of mud.

Creating a big ball of mud is easy: just add code and Mountain Dew :-)

Let's assume that you'd like to stay out of the mud. What other options are there?

Some simple process changes can help fight complexity:
  • Analyze code metrics upon checkin and reject new code if the changed files don't pass complexity targets (this will initially slow down development if you impose it mid-flight, but it will improve your code quality).
  • Allocate bandwidth for complexity bashing: reserve capacity such as one sprint per release, or a percentage of total story points (e.g. 20% of all completed story points every month).
  • Temporal taming: Focus on different parts of the architecture over time, say a new area every month.
  • Something I've been wondering about: Are there processes that promote complexity? Or are some so time consuming that they prevent developers from addressing complexity?
  • Automation is a powerful tool. You can easily add exceptions to a manual process ("Oh, well if it's an app server in cluster B, then we need to run this additional program"), but an automated process is a lot harder to complexify, and if it needs additional steps, at least you'll know they'll be executed consistently.
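The checkin-gate idea in the first bullet can be sketched in a few lines. This assumes your metric tool has already produced a complexity number per changed file; the names and the threshold are hypothetical.

```ruby
# Hypothetical checkin gate: fail the commit when any changed file
# exceeds a complexity budget. The metrics hash stands in for the
# output of whatever tool (NDepend, etc.) your build already runs.
MAX_COMPLEXITY = 10

def checkin_allowed?(metrics)
  offenders = metrics.select { |_file, cc| cc > MAX_COMPLEXITY }
  offenders.each_key { |file| warn "#{file} exceeds complexity target (#{MAX_COMPLEXITY})" }
  offenders.empty?
end

puts checkin_allowed?("billing.rb" => 4, "parser.rb" => 7)   # true
puts checkin_allowed?("billing.rb" => 4, "parser.rb" => 23)  # false
```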

Complexity has spawned many solutions at the architecture / software engineering levels, though even something as basic as ensuring developers all have a common understanding of the architecture and documenting its basic idioms can go far. Other solutions are very well covered in our industry:
  • Design patterns. Tried and true approaches to common problems.
  • Aspect Oriented Programming. AOP's focus on abstracting common behaviors from the code base can reduce its complexity.
  • Service Orientation. Ruthlessly breaking up your applications into disparate, independent services reduces the overall complexity of the system. This is an SOA approach, but without the burdensome standards and machinery that armchair architects are prone to impose. One of my favorite examples of this approach, Amazon.com, has been using SOA since before anyone thought up the acronym. Creating loosely coupled services with standard interfaces makes it much easier to update or completely replace a service than performing the same work in an inevitably intertwined monolithic application.
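A tiny sketch of that loose-coupling point: as long as callers depend only on a small, stable contract, the service behind it can be replaced outright. All names here are illustrative.

```ruby
# Loose-coupling sketch: the caller depends only on a small, stable
# contract (quantity), so the service behind it can be updated or
# completely replaced without touching its consumers.
class LegacyInventory
  def quantity(sku)
    { "widget" => 3 }.fetch(sku, 0)  # stand-in for a call into the monolith
  end
end

class NewInventoryService
  def quantity(sku)
    { "widget" => 3 }.fetch(sku, 0)  # same contract, brand-new internals
  end
end

def in_stock?(inventory, sku)
  inventory.quantity(sku) > 0        # the caller sees only the contract
end

puts in_stock?(LegacyInventory.new, "widget")      # true
puts in_stock?(NewInventoryService.new, "widget")  # true
```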

The most powerful weapon against the encroachment of complexity is culture: the shared conviction among developers that everyone needs to pitch in to reduce it.
  • Refactoring: developers should feel empowered to refactor code that's overly complex, not in line with the evolution of the architecture, or simply way too ugly. Two key enablers are required and both need a strong cultural buy-in:
  • A solid set of unit and other tests, so developers know if they've broken something
  • A fast build & test cycle. Most developers like to work in small increments: make a small change, test it. If a build & test cycle takes 15 minutes, very few developers will refactor anything that isn't directly in their path. I really like the work Etsy has done in this area, and on culture in general, by focusing on developer happiness.
  • Adopt this maxim: "Leave each class a little better than when you found it". Even if it's a small change - adding a comment, reformatting a few lines of code - taken in aggregate these changes really add up over time.
  • Remove features. I heard one of Instagram's founders state that they spend a good deal of time removing features as well as adding them. That was probably a slight exaggeration, but removing features can be very powerful in terms of fighting complexity: both directly (fewer lines of code == lower complexity, except with perl ;-), and indirectly as a signal to the team and your customers.
  • What have I missed? I haven't written about complexity at the database level, though while we're on the topic, I suspect that however much I like NoSQL databases, their rise will increase data complexity in the long term. The leeway many of them give developers in storing information will make it very hard to manage: data elements will be needlessly duplicated, inconsistencies within elements will abound, etc. Error recovery will be critical, as will a strong business layer to help provide consistency.

    (Another source of complexity! Source :-) 

    Happy simplifying!
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342051 2011-06-04T23:26:43Z 2013-10-08T16:34:45Z How eBay Scales its Platform
    In these days of discussing how Facebook, Twitter, Foursquare, Tumblr, and others scale, I don't often think of eBay anymore. Yet eBay, despite its age and ugly UI, is still one of the largest sites on the internet, esp. given its global nature. So I enjoyed this QCon talk by Randy Shoup, eBay Chief Engineer, about Best Practices for Large-Scale Websites.

    Here are a few lessons that caught my eye:
    • Partition functions as well as data: eBay has 220 clusters of servers running different functions like bidding, search, etc. This is the same model Amazon and others use
    • Asynchrony everywhere: The only way to scale is to allow events to flow asynchronously throughout the app
    • Consistency isn't a yes/no issue: A few datastores require immediate consistency (Bids), most can handle eventual consistency (Search), a few can have no consistency (Preferences)
    • Automate everything and embed machine learning in your automation loops so the system improves on its own
    • Master-detail storage is done detail first, then master. If a transaction fails in the middle, eBay prefers having unreachable details than a partial master record. Reconciliation processes clean up orphaned detail records
    • Schema-wise, eBay is moving away from strict schemas towards key/value pairs and object storage
    • Transitional states are the norm. Most of the time eBay is running multiple versions of its systems in parallel; it's rare that all parts of a system are in sync. This means that backwards compatibility is essential
    • "It is fundamentally the consumer's responsibility to manage unavailability and quality-of-service violations." In other words: expect and prepare for failure
    • You have only one authoritative source of truth for each piece of data but many secondary sources, which are often preferable to the system of record itself
    • There is never enough data: Collect everything; you never know what you'll need. eBay processes 50TB of new data / day and analyzes 50PB of data / day. Predictions in the long tail require massive amounts of data
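The master-detail point is worth a sketch. Below is a toy, in-memory version of the detail-first write ordering and the reconciliation pass described above; the store and record names are hypothetical, not eBay's actual design.

```ruby
# Sketch of detail-first writes: persist detail rows before the master
# record, so a mid-transaction failure leaves only orphaned (unreachable)
# details rather than a master record missing its details.
class OrderStore
  attr_reader :details, :masters

  def initialize
    @details = {}  # detail_id => data
    @masters = {}  # master_id => list of detail_ids
  end

  def write_order(master_id, line_items, fail_before_master: false)
    ids = line_items.map do |item|
      id = "#{master_id}-#{item[:sku]}"
      @details[id] = item               # details go in first...
      id
    end
    raise "crash before master write" if fail_before_master
    @masters[master_id] = ids           # ...master last
  end

  # Reconciliation pass: delete details no master points to.
  def reconcile!
    reachable = @masters.values.flatten
    @details.keep_if { |id, _| reachable.include?(id) }
  end
end

store = OrderStore.new
store.write_order("o1", [{ sku: "a" }, { sku: "b" }])
store.write_order("o2", [{ sku: "c" }], fail_before_master: true) rescue nil
store.reconcile!                 # cleans up the orphaned "o2-c" detail
puts store.details.keys.inspect  # ["o1-a", "o1-b"]
```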

    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342065 2011-05-26T06:27:00Z 2013-10-08T16:34:45Z Platforms as a Service Revisited
    In September 2007, Marc Andreessen wrote a thought provoking blog post describing a way to categorize different types of Platforms as a Service (PaaS). Over the years we’ve often made use of Andreessen’s levels within our Engineering team as a convenient way to discuss how we want our own platform to evolve.

    So I was surprised when I went looking for that article the other day and it was nowhere to be found on Andreessen’s blog! Fortunately I was able to rescue it from oblivion thanks to the internet archive. I’ve also uploaded a PDF of the article to Scribd.

    Andreessen’s premise is that there are three levels of internet platforms (the term Platform as a Service didn’t exist back then):
    1. At level 1 a platform is essentially a series of APIs in the cloud (another term that had yet to make its appearance) that app developers can leverage in their own apps.
    2. The prime example of a level 2 platform is Facebook. In addition to the APIs it makes available, Facebook also gives developers a way to embed their apps into its user interface.
    3. A level 3 platform achieves something the other two levels can’t: It runs developers’ apps for them. Examples here include Force.com (Salesforce.com’s PaaS) and Andreessen’s own Ning.

    As the levels go up they become easier for developers to program on and manage. A company working on a level 3 platform shouldn’t need to worry about hardware and operating systems. The categories aren’t perfect though: Amazon’s Web Services offerings are clearly level 3 (they run your code) while forcing you to still manage a virtual infrastructure.

    That said, many platforms do fit the model well. Thousands of companies offer APIs and therefore qualify as level 1. Platforms such as Google App Engine, Microsoft Azure, Heroku, EngineYard, etc. all offer flavors of level 3, some “purer” than others, i.e. with more or less hardware/OS abstraction.

    At RelayHealth we put a number of APIs at our partners’ and clients’ disposal. Some are web services, others rely on healthcare specific protocols such as HL7 or CCD riding on top of a variety of communication channels.

    Our approach to level 2 turns Andreessen’s definition inside out: instead of embedding third-party apps into our UI, we make it easy for partners to customize and embed our modules into their applications. This matters to many partners because building features like ePrescribing themselves would be prohibitively expensive. By providing these capabilities we enable our partners not only to deliver key features to their customers but also to complete their EHR certifications (vital so their clients qualify for federal incentives).

    Whether you're building a whole platform or just some simple APIs, Andreessen’s article is worth reading.
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342067 2011-04-15T06:34:00Z 2013-10-08T16:34:45Z Microsoft's Losing the Web Application War

    Netcraft's surveys of internet web servers have been a staple of the net since 1995. Eons in internet time! In those early days I fondly remember regularly checking Netcraft for updates and discussing the merits of the various web servers with friends and colleagues.

    I hadn't thought of Netcraft in years and when I suddenly remembered them the other day, I had to go check. How were the different web servers doing?

    I wasn't surprised by the rapid growth of Apache but I was surprised by the dramatic fall and subsequent slow rise of Microsoft's web server.  According to Netcraft the drop in Jan-Jun 2009 was caused by a reduction in activity in Microsoft's Live Sites.

    Looking at web server popularity in relative terms, the slow rise becomes a rapid decline in market share: from close to 40% to around 15% penetration in four years.

    It's useful to remember that there are lies, damn lies, and statistics. There could be many explanations for Microsoft IIS' relative decline. 

    One is that Netcraft is measuring incorrectly. But Netcraft has been at this for a long time, so I'm going to assume they know how to accurately count servers. Part of the decline is due to Live Spaces moving to Wordpress. Clearly Microsoft doesn't view blogging as strategic. Fair enough.

    Another point to keep in mind is that Netcraft's survey is internet-focused. If they could survey intranets, I'm sure the number of IIS servers would be significantly higher.

    Still, I can't help thinking that this is yet another front Microsoft is losing ground on. And the web server is just the tip of the iceberg. Internet sites aren't choosing Apache as much as they are choosing web application stacks that use it.

    Continued loss of web application stack market share will have significant repercussions in terms of revenues: hard costs such as server and software licenses, and soft costs such as losing popularity among developers. This isn't enough of a reason for established sites to ditch Microsoft. It is a reason to think twice before going with the Microsoft stack for new projects.

    It's a shame. Microsoft's web application stack has decent technology, and Microsoft has smart engineers. They are quite capable of innovating in this space. They're just not doing so.
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342084 2011-03-15T07:03:00Z 2013-10-08T16:34:46Z Learn the Zen of a Programming Language with Koans

    I love the idea behind Ruby Koans: a set of failing unit tests that teach you the essence of Ruby as you make every test turn green. It's a brilliant idea. The tests themselves are usually simple and illustrative. You even get encouragement (or enlightenment? :-) as you fix them.

    The good news is that this idea has spread beyond ruby. There are koans in many languages:

    While learning a programming language is best achieved by writing a useful application, these koans are a very welcome (and fun!) addition.
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342106 2011-03-07T20:23:00Z 2013-10-08T16:34:46Z Hashtables in Mathematica

    I fondly think of Mathematica as a "kitchen sink language": other than that proverbial kitchen sink, it has functions for pretty much anything you can think of.

    Why then does it not have a hashtable data type?

    It turns out that it doesn't need one. Hashtables are built into the language at a fundamental level. Just start typing:

     h[foo] = 1;
     h[bar] = 2;


    And you have a hashtable!

    It's not quite that simple though. What if you want to list all the keys used in the hashtable? This function (from a handy StackOverflow answer) takes care of that:

     keys = DownValues[#][[All, 1, 1, 1]] &;
     keys[h]
     { bar, foo }


    Recently I was playing with NOAA earthquake data in Mathematica provided in the form of a TSV (Tab Separated Values) file. Mathematica easily parses it into a list:

     ed = Import[NotebookDirectory[] <> "EarthquakeData", "TSV"]
     Take[ed, 5] // TableForm
     8204 Tsu 2009 1 3 ...
     8211 Tsu 2009 1 3 ...
     8210 2009 1 8 ...
     8250 Tsu 2009 1 15 ...


    This was a good start but the data wasn't in a very useful form. What I wanted was to be able to address the data by column name and row number, so I wrote this helper function:

     MakeHash[hash_, a_] := Module[
      {keys = First[a]},
      Do[
       hash[keys[[i]], j - 1] = a[[j, i]],
       {i, 1, Length[keys]}, {j, 2, Length[a]}];
      hash[Dim] = {Length[a] - 1, Length[keys]};
      hash[Rows] = Length[a] - 1;
      hash[Cols] = Length[keys];
      hash[Keys] = keys;
      ]


    The first parameter is the name of the hash to create, the second is the array to parse (assuming the first row represents column headers). It's now easy to access the elements you want.

     MakeHash[ehash, ed]


    You'll notice MakeHash adds some convenience entries to the hashtable. I even included one for the keys, despite the function we defined earlier. This keeps MakeHash self-contained and also deals with a limitation of the keys function as it stands: since we're dealing with a two-dimensional hashtable, the keys function considers each key (i.e. ID,1 and ID,2 etc.) as distinct, and so returns way too many of them.

     Length[ehash[Keys]]
     46 (* expected *)
     Length[keys[ehash]]
     2074 (* woah! *)
     (* Let's fix this by eliminating dupes with Union *)
     keys = Union[DownValues[#][[All, 1, 1, 1]]] &;
     Length[keys[ehash]]
     50

    Why 50 keys and not 46? Because MakeHash added four more: Dim, Rows, Cols, and Keys.
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342125 2011-02-25T10:50:00Z 2013-10-08T16:34:46Z Solving the Hyperpublic Coding Challenge in Mathematica
    I quite enjoy the various coding challenges that appear to be gaining in popularity. Greplin's was a lot of fun and yesterday Hyperpublic released one of their own. Since I just noticed that a number of people have posted their answers I thought I'd post mine.

    I like to use Mathematica for these challenges. Why? It's a really powerful environment, it's a lot of fun to use, I have a copy :-), and completing these challenges always teaches me more about Mathematica itself.

    Challenge 1

    In this test you're given a file that represents which users have invited others, defines an influence metric, and asks you to find the influence of the top three influencers.

    Read in the sample file
    l = ReadList[NotebookDirectory[] <> "Hyperpublic Q1.txt", String];

    Define a function that returns the positions of the Xs in a line
    CalcFriends[s_] := Map[(#[[1]] &), StringPosition[s, "X"]]
    This returns a list of the positions of the Xs in a String
    E.g. CalcFriends of the fifteenth line (which represents a user with four friends) generates the indices of those friends
    {12, 23, 84, 93}
    A line with all Os (no friends for this user) gives
    { }

    Map CalcFriends over the list of lines
    f = CalcFriends[#] & /@ l

    A recursive function to calculate a user's influence
    Influence[l_List] := Length[l] + Fold[Plus, 0, Influence[f[[#]]] & /@ l]

    Now we just map Influence over the output of CalcFriends and take the top 3

    Take[Reverse[Sort[Influence[#] & /@ f]], 3]

    Challenge 2

    Here we're (essentially) asked to find the minimum number of moves needed to achieve a target.

    For some reason, even though I knew this was a linear optimization problem, I started coding it. A mistake caused me to rethink my approach which, when you're using Mathematica, usually goes along the lines of "Why am I writing code?! I'm sure there's a function for this in here somewhere!" :-)

    Lo and behold, there was:
    FindMinimum[ {p1 + p2 + p3 + p4 + p5 + p6, 
    2 p1 + 3 p2 + 17 p3 + 23 p4 + 42 p5 + 98 p6 == 2349 && 
    p1 >= 0 && p2 >= 0 && p3 >= 0 && p4 >= 0 && p5 >= 0 && p6 >= 0 && 
    p6 ∈ Integers && p5 ∈ Integers && p4 ∈ Integers && p3 ∈ Integers && p2 ∈ Integers }, 
    {p1, p2, p3, p4, p5, p6}]

    But you should see Yaroslav Bulatov's solution to this problem; it's much more elegant.

    Fun stuff... Not only does Mathematica give you some great tools for solving problems, it also solves them fast.
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342127 2011-02-21T01:32:07Z 2013-10-08T16:34:46Z Rails3 Custom Password Validators As I was writing validators for the User class of a Rails 3 app, I wanted to make sure that people wouldn't use their names, usernames, or email addresses as passwords.
    Unfortunately I couldn't find a way to accomplish this with the built-in validators. Fortunately Rails 3 makes it easy to write your own custom validators.

    Here's an extract of my User class

    The "password => true" tells Rails to call my custom validator which, in this case, has to be called password_format.rb.

    I keep my custom validators in /lib/validators, so I need to add the following to my config/application.rb file:

    And finally the validator itself:

    (Don't forget to write the specs to test this! :-)
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342141 2011-01-31T18:30:00Z 2013-10-08T16:34:47Z Mining the OSX Console for Fun and Profit

    Well... Fun? Yes (if you're a geek). Profit? Not very likely but if you search long enough who knows? :-)

    I rarely open the OSX Console app. This morning, while waiting for a call, I did and found this cry for help from Firefox:

    11.1.31 9:25:12 [0x0-0xbb0bb].org.mozilla.firefox[2734] SHOULD NEVER HAPPEN

    In fact I found dozens of them. Intrigued I started looking for other messages. Chrome, it seemed, was having problems of a more existential nature.

    11.1.31 9:34:37 [0x0-0x2a42a4].com.google.Chrome[12790] objc[12794]: Class CrApplication is implemented in both [...] and [...]. One of the two will be used. Which one is undefined.

    As long as one of them's used that's OK right?

    Pages should really know better.

    11.1.31 8:58:43 Pages[12487] *** WARNING: Method setDrawsGrid: in class NSTableView is deprecated. It will be removed in a future release and should no longer be used.


    11.1.30 17:56:04 AppleMobileBackup[8848] WARNING: Backing up bf6f8237f787cbf4206d1e107b24aacd55c44b5b

    Hmmm... Does MDRP phone home when you rip a DVD? Kind of: by default it will "Anonymously report rip statistics" but you can turn this behavior off in its preferences. I don't think this preference should be checked by default, but I like the fact that MDRP at least logs this activity.

    11.1.30 11:07:36 MDRP[4886] reportStats: http://www.macdvdripperpro.com/cgi-bin/report.cgi?fingerprint=2359FD92-3412-CD5E-9943-D0E0A0D0&video_ts=34673437&disc=99745725&badSectors=

    The most common message I saw? Variations of this one:

    11.1.29 1:12:15 <Many Different Programs Here>[229] Can't open input server /Users/[...]/Library/InputManagers/Edit in TextMate

    It seemed as if everyone was complaining about TextMate (I know, I should move to Vim, it's on the list...). Fortunately, there's a solution.

    After scanning a day's worth of logs I decided I had better things to do!
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342145 2011-01-26T04:13:45Z 2013-10-08T16:34:47Z Tech Trends I'm Watching in 2011 The good news for us techies is that there are so many things to choose from... So, in no particular order, here are some that I'm excited about.

    Continuous Deployment / Delivery

    Different people give it different names but the concept is the same: since you're already running continuous integration servers against your codebase to make sure it builds correctly and passes automated tests, why not go one step further and deploy to production instead of stopping at the integration environment?

    Companies that follow this approach will often release to production dozens of times a day. A key goal is reducing the "wait time" for a bug fix or a new feature to be available to users from days, weeks, or even months to a couple of hours at most.

    On the technical side, continuous deployment calls for extensive automation, comprehensive unit and integration test coverage, clear procedures for making (reversible!) database updates, improved production server monitoring to detect any flaws ASAP, and the ability to efficiently stage a deployment through multiple "waves" of servers (e.g. start with 1, then 10, then 100 servers) and of course roll it back in case of issues. 
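The staging-through-waves idea can be sketched as follows. Deployment and health checks are stubbed out, and the wave sizes and names are illustrative, not a prescription.

```ruby
# Sketch of a staged rollout through widening "waves" of servers, with
# rollback when post-wave health checks fail. Real systems would drive
# this from their deploy tooling and production monitoring.
WAVES = [1, 10, 100].freeze

def staged_deploy(servers, new_version, healthy)
  versions = servers.to_h { |s| [s, :v1] }      # everyone starts on v1
  deployed = []
  WAVES.each do |size|
    wave = (servers - deployed).first(size)
    break if wave.empty?
    wave.each { |s| versions[s] = new_version } # push build to this wave
    deployed.concat(wave)
    next if healthy.call(wave)                  # checks pass: widen out
    deployed.each { |s| versions[s] = :v1 }     # flaw detected: roll back
    break
  end
  versions
end

servers = (1..20).map { |i| "app#{i}" }
ok = staged_deploy(servers, :v2, ->(_wave) { true })
puts ok.values.count(:v2)   # 20: waves of 1, then 10, then the rest
```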

    On the business side, continuous deployment means more flexibility in how features are rolled out. While bug fixes may hit all users very quickly, Product Managers can now batch features together as needed and release them to a few clients for beta testing, before "flicking the switch" of a full deployment. Of course, all this entails tight cooperation between Development, Operations, and Product Management.

    While some will understandably be gun shy of pushing out code so quickly, I believe that companies that achieve continuous deployment will reap significant competitive advantage.


    Kanban

    Another example of how our world is moving from the discrete to the continuous is in development "methodologies". Long ago, waterfall gave us release cycles measured in months and years. Today, Agile approaches like Scrum typically aim for a "production-ready" codebase at the end of each sprint (usually two weeks). But two weeks is too slow if you're embracing continuous deployment.

    Consequently I'm seeing more and more activity around Kanban or Kanban-like models for software development. Based on Toyota's Lean Manufacturing principles, Kanban focuses on eliminating waste and improving efficiency by (among other things) maintaining a smooth flow of features and fixes through the "development pipeline". This "pull" system is far better suited to rapidly responding to external changes than waterfall or even Scrum (where early sprint termination is usually fairly costly).

    In reality, it can often make sense to combine elements of both Scrum and Kanban. In my own experience we've found Kanban well suited for operations teams, esp. when they're dealing with many emergent tasks.

    Kanban's challenge when used with continuous deployment is being able to decompose stories into small enough items. A four week work item defeats the purpose of continuous flow! :-)
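As a tiny illustration of the flow idea, here's a sketch of a kanban column with a work-in-progress limit: once the limit is hit, nothing new can be pulled until something finishes. The class and item names are made up.

```ruby
# Minimal kanban column: a WIP limit means new work is pulled only when
# there is capacity, favoring finishing work over starting it.
class Column
  def initialize(wip_limit)
    @wip_limit = wip_limit
    @items = []
  end

  def pull(item)
    return false if @items.size >= @wip_limit  # at the limit: finish first
    @items << item
    true
  end

  def done(item)
    @items.delete(item)
  end
end

dev = Column.new(2)
puts dev.pull("story-1")  # true
puts dev.pull("story-2")  # true
puts dev.pull("story-3")  # false: WIP limit reached
dev.done("story-1")
puts dev.pull("story-3")  # true: capacity freed up
```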


    NoSQL

    Alternatives to SQL really exploded in 2010 when NoSQL hit the mainstream. Products like CouchDB, MongoDB, Redis, and Cassandra are powering very large sites. These databases are in some ways the descendants of the OODBs that seemed poised to break SQL's hold on our data 10-15 years ago and, more directly, related to the BigTable technology Google developed.

    While the tech community has a well-known affinity for bright, new, shiny things and we're still learning how best to use NoSQL, there's no doubt we'll be seeing a lot of action in this space in 2011. We'll also see a backlash fueled in part by highly publicized failures like this one (though there will always be counter-examples!).

    Smart Phones

    Despite Microsoft's re-entry into the mobile space with Windows Phone 7 and Palm's impending release of webOS-based tablets (oh, and RIM too), I doubt Apple's iOS and Google's Android will lose much if any market share.

    Last year Adobe implemented Flash on iOS but was blocked by Apple's strict licensing agreement, which Apple then quietly relaxed a few months later. Given that you need your apps on both Android & iOS to reach the majority of consumers, I expect cross-platform development environments to get more airtime in 2011. Companies include Rhomobile, Airplay, and appMobi (which just announced a $6M round of funding yesterday).

    In the meantime, since Android and iOS have sophisticated (and mostly compliant) HTML5 browsers, some developers are building their mobile apps as web apps and releasing a "shell app" to both the Apple and Google app stores. Netflix is a prime example of this approach.


    It's not clear to me whether we'll see new classes of security products in 2011, but what is clear is that with attack surfaces multiplying, demand will be strong. Companies are expanding their apps into the mobile space, rolling out APIs, dealing with ever-increasing browser complexity, maybe even experimenting with NFC, and they often do so at the expense of security. Just as importantly, many web apps now outsource key functions to third parties (e.g. customer support, payments, emails...). Criminals targeting one of these providers could potentially affect hundreds or thousands of companies at once.

    Financial services may be where most of the money is but cyber criminals are nothing if not opportunistic. They'll keep looking for other industries to target, especially ones that might not be so security minded. Speaking of branching out, I'm not sure if it will happen this year, but with Apple's growing PC penetration it's only a matter of time before virus and trojan writers focus on OS X.

    Concerns about "mini WikiLeaks" within large organizations will likely increase demand for employee monitoring tools, egress filtering, and comprehensive audit trails. None of this will prevent a determined data thief, but it will satisfy auditors and shareholders.

    There's more to talk about: Rails becoming the de facto startup dev platform, the problems of complex systems, and the progress towards AI, but these will have to wait for another time.
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342158 2011-01-19T07:51:29Z 2013-10-08T16:34:47Z Fusing RSS feeds together from Posterous and other blogs Posterous is a great web app: they've managed to keep it simple and elegant to use while making it ever more sophisticated (thanks in large part to Garry Tan's leadership).

    They're also very quick to respond to questions from users:

    Why did I need such a script? My oldest domain, cyberclip.com, was registered in 1995 (back when "cyber" was cool! ;-) and went through early years of excitement and later years of neglect. A few months ago I streamlined it to a very simple design:

    Recently I thought it would be nice to add just a little more content and decided to include a list of the newest posts from all my blogs. Since Posterous doesn't supply a consolidated feed, here's the Ruby script I wrote to generate one. I've only tested it with FeedBurner and Posterous, but it should work with all properly formatted RSS feeds.
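The script itself was embedded in the original post and isn't preserved in this copy. Here's a rough sketch of the approach using Ruby's standard rss library (the function name and feed handling are illustrative, not the original RSSFusion code):

```ruby
require "rss"

# Merge several RSS feeds (passed as XML strings) into one list of
# items sorted newest-first. Fetching each feed's XML (e.g. with
# open-uri) is left to the caller.
def fuse_feeds(xml_feeds, limit = 10)
  # Parse leniently (do_validate = false) and pool all the items.
  items = xml_feeds.flat_map { |xml| RSS::Parser.parse(xml, false).items }
  # Newest first, capped at the requested number of entries.
  items.sort_by { |item| item.pubDate }.reverse.first(limit)
end
```

Each item keeps its title, link, and pubDate, so rendering a consolidated list on a homepage (or re-serializing it as a fused feed) is straightforward from there.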

    I've purposely kept error handling out of this script for simplicity's sake because that's handled by the program that invokes it. That program runs as a cron job and regenerates the homepage on a regular basis. It won't replace the homepage if RSSFusion errors out for whatever reason, so the worst case is a stale homepage, not a broken one.

    Combining blog posts for inclusion in another page is one use, but this script could also be used to generate a fused RSS feed of multiple blogs, which is something I should probably add to my homepage as well...
    Paul Clip
    tag:tech.cyberclip.com,2013:Post/342161 2011-01-17T00:15:00Z 2013-10-08T16:34:47Z Software Architecture should be forged in Fire, not Carved in Ice

    (Picture by Jason Bolonski)

    I've seen a number of corporate environments that carve software architectures out of ice. Why ice? Because an ice architecture sure looks great: sparkling, pristine, perfect even. This approach often feels right, especially in cost-conscious, slower-moving organizations. You know how it goes: spend the bulk of your time in design, figure things out properly, measure twice (or thrice), cut once, and then you're set. Sadly, once carved, often the only thing left to do with such an architecture is freeze it, lest it melt. And a frozen architecture is rarely useful.

    The problem is that you're never set. Needs keep changing and if your architecture can't evolve with them, you've (best case) got a working but unmaintainable and unevolvable app, or (worst case) something that becomes unusable, even by the people who need the application the most and who are willing to put up with its flaws.

    The architecture you want is not one carved in ice. Rather, you want something you've not only heated and beaten into shape to serve your current purpose, but also a design that you can reforge into something new as needs dictate. So how do you achieve this?

    Understand your problem space & key challenges
    I'm not advocating no design, I'm advocating just enough design. Knowing how far to go is both art and science. Two things that will help are a good understanding of the problem space and of the main obstacles you'll face. If you're building a social app, you need at least broad designs for your sharing/trust model for users, how you will distribute data and scale, what security choices to make, etc. If you don't have strong expertise in house, hire someone who does, even as a part-time consultant. It will be money well spent.

    Rapid Iteration
    Focus on speed. Not at the expense of quality but at the expense of features. Build your Minimum Viable Product, get it live, get it used, and iterate. The only way to really learn what works and what doesn't is to let your users at your app. The faster your iteration the more you can adapt (reforge) to changing needs. This is one of the reasons that agile development has become the de facto development approach in the past decade. Continuous deployment and delivery are other, welcome, instances of this trend.

    Less is more
    Build what you need and improve when needed. This goes hand in hand with rapid iteration and goes against the "what if" architecture. "What if I want to go live in other countries? Oh, better internationalize." "What if we need to support suppliers as well as customers? Good point, better code them in." "What if I need to scale to hundreds of millions of users?" The list goes on, and the longer you let it go the slower you'll be, not just in development but in maintenance and new feature additions.

    This doesn't mean making uneducated decisions. If you think there's a good chance you'll need to go international in the future, leverage a framework that supports it. But don't build in internationalization (i18n) until you're ready to use it. I'd argue that the cost to support i18n - testing multiple languages across the app, different formats & interfaces, new & altered business logic, etc. - is not worth saddling your dev team with against the day when you finally need the feature.

    Ultimately, the fewer lines of code the better. To paraphrase Dijkstra: it's not lines of code produced, it's lines of code spent! So spend wisely.

    Test Driven / Refactoring
    If there were one software engineering practice I'd enforce, this is it. A comprehensive and robust set of tests gives you the confidence to make radical structural changes to your application and still have a working product at the end of the process. I've seen business leaders question the value of putting in effort here. Understandably, they're concerned that all the time spent coding tests could have been better spent coding new features.

    A counter-argument to this is to remind the business folks that, once tests are written, all the downstream QA is not only free but extremely rapid. That's when you can reassign developers to new areas confident in the knowledge that you'll detect breaks long before they ever reach end users. As your codebase grows these tests will be a godsend to help avoid spending all your time on maintenance.

    "Less is more" is your friend here too: focus on DRY (Don't Repeat Yourself). The DRYer your code, the less of it you have, so the fewer tests you'll need and the easier it will be to reforge.

    Culture
    The most fundamental principle, and so the most important. The culture of an organization is represented by people's shared values, goals, and behavior. Whether implicit or explicit, culture is the bedrock on which all else rests. The more individuals align with the culture, the more effective the team. This buy-in means that a successful culture cannot be dictated (typically by management); it must be nurtured.

    Culture will obviously vary by company but these common values will support all the principles listed above:
    • Continuous improvement: Continually striving to make things better, to achieve ever higher quality, and redefine goals as necessary 
    • Trust: Despite the best laid plans, failures happen. When they do, an organization needs to display enough trust in individuals and teams to allow them to fix the problem, learn from the experience, and come out stronger
    • Collaboration: In my experience, tech and business are often in push-pull. Tech rarely gets the necessary time or resources, and business rarely gets all its desired features. The principles above drive long-term value. If the culture prizes collaboration, openness, and sustained value, then neither group will want to sacrifice the long term for the short term

    Update (2011.2.3): Great article on the importance of culture from Wealthfront.


      What about...?
      What about separation of concerns, SOA, AOP, and more? All of these have their place and can certainly improve your application and architecture. Design patterns, if properly used, can make your software design more flexible and reduce the amount of refactoring needed to add new features. Still, these practices are should-haves, not must-haves. They can make good architecture great, but on their own won't keep it great in the long run.

      Ultimately the key to building great software is great people. Finding them, building the right culture together, and continuously evolving the organization and its processes. Over the years, new software engineering practices will come to light but these principles will evolve much more slowly.
      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342168 2010-12-30T13:22:39Z 2013-10-08T16:34:47Z Optimizing Facebook's "Hoppity" Puzzle I found Facebook's puzzle page the other day. While it has some very meaty challenges, it also has a couple of trivial ones. These easy puzzles are there to allow you to make sure you can submit solutions (I'm not consistently getting mine to run but I'm hopeful that turning off "Always send Windows-friendly attachments" in OSX Mail will do the trick).

      One easy puzzle is called Hoppity. Essentially, you count up from 1 to a specified number and follow these rules:
      • For integers that are evenly divisible by three, output the exact string Hoppity, followed by a newline.
      • For integers that are evenly divisible by five, output the exact string Hophop, followed by a newline.
      • For integers that are evenly divisible by both three and five, do not do any of the above, but instead output the exact string Hop, followed by a newline.
      This very simple program is typically written with a few if then else's, though you could also simulate a bitmask and use a case statement:
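The code itself was embedded in the original post; here's a minimal Ruby version of the if/elsif approach (the helper name is mine):

```ruby
# Return the Hoppity output lines for the integers 1..n.
def hoppity_lines(n)
  (1..n).filter_map do |i|
    if i % 15 == 0 then "Hop"        # divisible by both 3 and 5
    elsif i % 3 == 0 then "Hoppity"
    elsif i % 5 == 0 then "Hophop"
    end                              # other integers produce no output
  end
end

puts hoppity_lines(15)
```

Note that the "divisible by both" test has to come first, otherwise 15, 30, 45... would match the `% 3` branch and print Hoppity instead of Hop.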

      As soon as I'd written it I started wondering: can I optimize this? I mean, this is Facebook we're talking about. Endless scale. So if they offered a Hoppity function on the main page, you bet they'd have to make it run fast! :-)

      Looking at this program it's clear that the output should repeat every 15 counts. Here's a Mathematica plot to illustrate this, where I've replaced Hoppity, Hophop, and Hop with 1, 2, and 3 respectively.
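One way to cash in on that 15-count cycle (a sketch of the idea, not code from the original post) is to precompute a single cycle and answer everything else by table lookup:

```ruby
# Precompute the output for one full 15-count cycle; positions that
# produce no output are left as nil.
HOPPITY_CYCLE = (1..15).map do |i|
  if i % 15 == 0 then "Hop"
  elsif i % 3 == 0 then "Hoppity"
  elsif i % 5 == 0 then "Hophop"
  end
end.freeze

# The three divisibility tests per number are replaced by a single
# modulo and an array index.
def hoppity_fast(n)
  (1..n).filter_map { |i| HOPPITY_CYCLE[(i - 1) % 15] }
end
```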

      So if you're ever interviewing at Facebook and you're given this problem (which would be surprising, I agree), you can offer this optimization to make sure the Hoppity feature doesn't take down their servers when they release it to all their members :-)

      Pre-computing is always useful!
      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342180 2010-10-28T19:26:54Z 2013-10-08T16:34:47Z External network card: Essential travel equipment
      We've traveled extensively in Europe over the past few months, and throughout these trips one of the few constants is the search for Internet connectivity. As soon as we arrive somewhere, the quest for wifi starts. Sometimes we're lucky enough to be staying with friends or at a "wired" hotel. Often though we're renting an apartment, or just in a place that hasn't seen the light yet :-)
      One solution is to visit the local Internet cafe, though that's seldom convenient. For one thing, they're rarely close by. For another, spending time in one isn't practical: evenings are when we like to catch up with mail, tweets, blogs, and plan our upcoming activities. Carting the family off to the cafe after a long day of sightseeing is no fun.
      Our solution is to look for a generous neighbor with an open wireless access point. But to stand any chance of finding one, your laptop needs help. Its wifi capabilities just don't have the range you need. Prior to our last trip to Europe, I purchased an Alfa external wifi card with extra antenna.
      At 1,000 milliwatts and with the larger +9 dB antenna, only once was I unable to find a friendly neighborhood access point (that was in the suburbs of Paris, seems like the Parisians don't like to share, or are just more secure :-) At less than $40, this kit is now one of my "must bring" items.
      • Installing and using one of these cards is a piece of cake on Windows (just follow instructions and use the drivers on the included CD, or download the latest from the web) but getting it to work on OSX takes a little more work, see below
      • Walk around the premises as you look for that open access point, you'll detect different networks as you go from room to room
      • Set your card on a window sill (i.e. with the window open) for even better reception, it will make a noticeable difference
      • Be courteous: don't start downloading huge amounts of data, watching YouTube, etc. You're getting free internet access, don't be a pest
      • Be careful: you never know who else is listening to traffic on this network. Use HTTPS wherever possible. I heartily recommend Firefox with the HTTPS Everywhere extension (which isn't really everywhere, but it's a lot better than not using it)
      • I've passed through airport security with the +9dB antenna and no one's made an issue of it (rightly so but you never know what's going to tweak airport security these days...)
      • Added bonus: this card works great with BackTrack
      As I mentioned above, getting the RTL8187L (that's the chipset in the card) drivers working on OSX 10.6 is a little more involved than on Windows, but I've successfully installed them on two MacBook Pros. These instructions come from http://www.insanelymac.com/forum/index.php?showtopic=208763
      1. Download the RTL8187L driver for Mac OS X 10.5 from RealTek
      2. Install it, including restarting. Ignore the error about the kext not being installed properly
      3. Open /Applications/Utilities/Terminal, and type the following commands in order:
        1. cd /System/Library/Extensions/
        2. sudo chown -R root:wheel RTL8187l.kext/
        3. sudo chmod -R go-rwx RTL8187l.kext/
        4. sudo chmod -R go+rX RTL8187l.kext/
        5. sudo kextutil -t -v /System/Library/Extensions/RTL8187l.kext
      4. Agree when it pops up and tells you that there's a new network interface that's been added.
      5. You should then be able to open the /Applications/Realtek USB WLAN Client Utility and configure it to connect to your network.

      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342187 2010-08-31T02:29:30Z 2016-07-08T05:56:53Z Rails3 Mind Map I find mind maps useful for many purposes. The process of clearing your screen, fullscreening your mind mapping tool, and immersing yourself in a topic of interest is a great brainstorming exercise.

      Today, my goal wasn't creativity, it was to build a map of the main components of Rails 3. You'll likely find the PDF more useful as its nodes are clickable and refer back to the Rails API (and GitHub in a couple of cases where I found the documentation to be more useful).

      This isn't a comprehensive map. Let me know what I've missed.

      (This map was created with MindNode Pro, an easy-to-use, cheap, and high-quality OS X app, proving that sometimes you can have your cake and eat it :-)

      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342192 2010-08-30T08:00:00Z 2013-10-08T16:34:48Z No More Excuses! Using RVM to Play with Rails 3
      Now that Rails 3.0 is out, it's high time to start using it. But what if you want to keep Rails 2.x around for your current projects? Fortunately, on OS X, there's a simple solution: RVM.

      Once you've installed RVM, you'll need to install a version of ruby compatible with Rails 3. There are two choices: 1.8.7 and 1.9.2. Given its new features and speed improvements, 1.9.2 is the one to choose, unless you have particular dependencies on 1.8.7.

      Installing 1.9.2 is simple: rvm install 1.9.2. This will download, compile, and install 1.9.2 to a .rvm folder in your home directory.

      Once that's done, type rvm 1.9.2 to switch over and rvm info to confirm that you're now running 1.9.2. Note: this will only apply to the current terminal window; here's how to make it the default.

      Type gem list and you should see just two gems: rake and bundler.

      Now go ahead and install Rails 3: gem install rails. Confirm by way of rails --version and gem list.

      That's it, you're done... Now have fun!

      Want to go back to your previous version of Ruby? Just type rvm system and you'll revert to your standard Ruby installation and the gems that went with it.
      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342196 2010-08-15T23:17:00Z 2013-10-08T16:34:48Z Steps to coax The Gimp into compiling under Macports

      Macports is a great tool for easily installing a ton of open source software on your Mac. The packages are generally very well maintained, though that can be tough for very complex ones like The Gimp, the best (?) open source image processing software out there. Though you can download a prebuilt version of The Gimp, I wanted to build my own in order to leverage some third party plugins, like Tilt Shift photography.

      If you haven't yet, install Macports.

      Then kick things off with "sudo port install gimp". This will likely generate a very impressive list of dependencies and start chugging through them. Expect this to take hours, not minutes. Hopefully, it will build cleanly all the way through. If not, the following notes may be of use... 

      My first failure (bug #26084) was quickly resolved thanks to the lightning support of the Macports maintainers (kudos to ryandesign [at] macports.org). Turns out I was building a slightly out-of-date package. Following the advice to "sudo port selfupdate" and "sudo port clean libgnomeui", the build proceeded smoothly.

      The next build failure was already captured by bug #25962. There are three errors listed, though I only saw the first two. These were easily addressed with a couple of "ln -s" commands as described in the bug. I also installed python_select ("port install python_select") and pointed it to python2.6 as described in the comments.

      The final bug occurred when building gimp itself. You can check out #26095 and its simple fix.
      Paul Clip
      tag:tech.cyberclip.com,2013:Post/341986 2010-08-13T00:28:00Z 2013-10-08T16:34:44Z Web API Lifecycles and Hypecycles

      I've long been fascinated by the explosion of APIs over the past years, captured by the excellent ProgrammableWeb site.

      Curious about how categories were evolving over time, I mined ProgrammableWeb's index for interesting patterns. I focused primarily on categories with at least 50 APIs, dividing them up into semesters from the second half of 2005 to the first half of 2010. One important detail to be aware of: the PW index includes the last-modified date of each API, not its creation date. So think of these graphs as a measure of activity in a particular category. For example, an API may have been created in 2006, but if it was updated in 2010 it will count towards that last bar on the graph.
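The bucketing step is simple to reproduce. A sketch in Ruby (the dates below are made up for illustration; they are not ProgrammableWeb data):

```ruby
require "date"

# Map a date to its half-year "semester" label, e.g. 2009-03-15 -> "2009-H1".
def semester(date)
  date.month <= 6 ? "#{date.year}-H1" : "#{date.year}-H2"
end

# Count APIs per semester from their last-modified dates.
dates = [Date.new(2009, 3, 15), Date.new(2009, 9, 1), Date.new(2010, 2, 2)]
activity = dates.group_by { |d| semester(d) }.transform_values(&:size)
```

With real last-modified dates per category, `activity` gives the per-semester counts behind each bar on the graphs.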

      So what's hot? Social APIs, unsurprisingly, show feverish activity: every site is busy creating or expanding their offerings in this space.

      Enterprise APIs too are seeing a lot of movement.

      Encouragingly, so is Shopping. A harbinger of an economic turnaround, or just wishful thinking? :-)

      What about up-and-coming API categories to watch? Of the ones with over 30 APIs, Travel and Utility have seen the most movement over the last year and a half.

      Here are the remaining 13 categories with 50 or more APIs. Other strong performers include Government, Telephony, and Tools. Categories in relative decline? Reference and Video.

      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342000 2010-08-11T02:45:00Z 2013-10-08T16:34:45Z DTerm: Useful Omnipresent Command Line for OS X
      Found this sweet free utility called DTerm recently. It enables you to pull up a context-aware pop-up that you can use to run command line utilities from whatever program you're currently using. What do I mean by "context-aware"? DTerm will automatically change directories to the one your program is currently in. Moreover, for those of us using multiple spaces, any programs you run from DTerm will open their windows in your current space.

      Here's an example. Say I want to package up a bunch of images. Simple: hit Shift-Cmd-Return to invoke DTerm, its window overlays the Finder's, and I can then run a "tar" command. That's it. I could even stay in DTerm and copy pics.tgz to a different drive, or scp it to another server.

      This is a very useful tool. Here are a few other things you can do with it:
      • Quick calendar: "cal" will display this month's calendar, hit Shift-Cmd-C and you'll have it in your clipboard (cal 2010 will give you this year's calendar)
      • Starting TextMate: Typing "mate ." from a Finder window will open TextMate in project mode in the current directory
      • Comparing files: Select 2 files in the Finder, run DTerm, type "cmp" or "diff" then Shift-Cmd-V to paste the names of the files you selected into DTerm
      • MD5 checksum: Select the files you want to sum and run "md5" + Shift-Cmd-V
      • Info on all files in the current directory, including hidden ones: "ls -al"
      • Create a series of folders in the current directory: much faster to type "mkdir foo bar foo/bar" than to use the Finder
      • Quick lookup info on a domain: "dig google.com"
      • Want your mac to read you something? Select some text, copy it, invoke DTerm and type "say" followed by pasting the text surrounded by quotes
      • Byte, word, line counts: "wc" and the file(s) you're interested in

      Not all these examples require DTerm's features but having a terminal window at your fingertips, without needing to switch context, is very useful.

      And it's another reason to make better use of all those command line utilities!

      Hat tip to @azaaza for the pointer.
      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342002 2010-08-02T03:04:17Z 2013-10-08T16:34:45Z Defcon Day Two Highlights If there was a theme to the presentations I saw on Saturday, it's that as a technology is increasingly closed, its security decreases exponentially. The solution is sunlight: bring the products and their vulnerabilities out in the open. Yes, it does mean running the risk of vulnerabilities becoming known. But it's the only solution we've found that actually produces fixes. An obscure, insecure product helps only the black hats.

      Insecurity Engineering of Physical Security Systems: Locks, Lies, and Videotape by Marc Weber Tobias, Tobias Bluzmanis, Matt Fiddler
      A good example of this was a talk by three locksmithing experts. Though their preamble was too long, the main part of the presentation was fascinating. They showed how to break five different types of locks: from a re-keyable mechanical lock to a fingerprint-reading lock. All were defeated with simple attacks, some so simple that they beggared belief. The fingerprint reader, for example, has a standard bypass lock in case the reader's battery runs out... With the insertion of a paperclip in the bypass lock, it opened like a charm. Wired has a great writeup, including videos.

      Extreme-range RFID Tracking and Practical Cellphone Spying by Chris Paget
      Chris gave two great presentations. The first showed how to read RFIDs at ranges of a couple hundred feet; the second focused on how to build your own GSM base station. Both talks were full of technical information and Chris did a good job of clearly walking us through the steps he'd taken. The GSM talk was fascinating. In essence, it is surprisingly easy not just to create your own base station (cost ~$3,000), it's also trivial to spoof an existing carrier such as AT&T. When audience cellphones connected, Chris' fake tower would instruct them to drop encryption (a fact that handsets don't advertise to their users, BTW), enabling the capture of phone conversations. While this currently only worked for outbound calls, it was still an impressive demonstration. One solution? Switch to 3G, it's a lot more secure than 2G.

      We Don't Need No Stinkin' Badges: Hacking Electronic Door Access Controllers by Shawn Merdinger
      This pres was a good example of the evils of security by obscurity. Electronic door access control is ubiquitous throughout the business world, yet these systems are usually run by building management. These folks may know a lot about physical security, but not information security. The result? Vendors supplying shockingly insecure systems that are never patched. Shawn focused on a product by S2 Security but claimed many competitors also had flaws such as insecure default configurations, full access to nightly database backups, an unprotected URL to reset the device to factory defaults, leveraging vulnerable software components, etc. Basically, if your company's door access controller is on a network (an internal one, hopefully!), you had best isolate it as much as possible. To my knowledge Shawn hasn't uploaded his pres anywhere, but here are the four S2 CVEs he submitted.

      You're Stealing It Wrong! 30 Years of Inter-Pirate Battles by Jason Scott
      A lighter look at the history of pirate groups and much much more. Scott, a computer historian and Defcon regular, gave a highly entertaining presentation and provided a wonderful trip down memory lane for many an audience member (myself included!). We gave him a standing ovation at the end of his speech (something I've rarely seen at Defcon). Jason, make sure you come back next year. Oh, and if you, dear reader, have old computer stuff you want to get rid of... Don't! Send them to Jason instead.

      Malware Freak Show 2: The Client-Side Boogaloo by Nicholas J. Percoco and Jibran Ilyas
      These two gents from Trustwave demo'ed four examples of malware found at client sites over the past year. Five years ago, they said, attackers focused on "smash and grab": find a vulnerability, exploit it, get as much info as you can, get out. Nowadays attackers are writing custom targeted malware that stays under the radar, allowing them to slowly infiltrate their victims' networks. Not sure what their sample size was but they claimed that on average malware infiltrates a site for 156 days before being detected. That's a long time.

      Jackpotting Automated Teller Machines Redux by Barnaby Jack
      Arguably the most talked about presentation at Black Hat and Defcon, Jack blew the doors wide open on ATM security. There are a lot of articles about his talk on the net, so I won't repeat it here. Jack basically found a number of vulnerabilities in these Windows CE devices (yes, Windows CE), including a remote exploit allowing him to reprogram the ATM. One of the most dramatic moments of his pres came when, in a matter of seconds, he popped open an ATM (cabinet master keys are apparently trivial to obtain), inserted an SD card with his own code, and power cycled the machine. Once the ATM booted you can see what appeared on the screen below and watch the video to see what happened next!

      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342024 2010-07-31T07:48:30Z 2013-10-08T16:34:45Z Defcon Day One Highlights While a few of Friday's talks contained little new, original, or useful information (disappointingly the former Facebook CSO's talk was particularly inane), the majority of the presentations were interesting. A few were eye-opening. Here are some short summaries of my favorites.

      Crawling Bittorrent DHTs for Fun and Profit by Scott Wolchok
      Scott presented his research on creating a very comprehensive database of Bittorrent Distributed Hash Tables. Suffice it to say that his approach and findings will unfortunately prove very useful to record companies if they aren't already using these techniques. File sharers beware!

      The Law of Laptop Search and Seizure by the EFF legal team
      This talk focused on what law enforcement can and can't do (but may still try to get away with!) when seizing your laptop. There were a lot of details presented... orally. EFF, why no presentation? A few key points from my notes (oh, and in case you hadn't realized: IANAL!)
      • In general law enforcement can't just take your laptop and search it; your rights are protected by the Fourth Amendment
      • If law enforcement does want to search your laptop, they need a warrant, or you need to fall into an exception category such as: you have a public share on your computer, you're sharing via P2P, you've given consent, there's immediate danger that you might destroy the info, etc.
      • You can revoke consent at any time (i.e. if you first let law enforcement look at your laptop, you can change your mind)
      • If there are multiple users of a computer, any one of them could give consent, though courts have recognized that this consent only extends as far as the authorizing user's access (and the forensic tools law enforcement uses make no such distinctions... Beware!)
      • All searches that occur at a border are considered reasonable. No suspicion is needed for any searches to occur, nor is a warrant needed (in other words: your rights go out the window!)
      • You cannot be forced to hand over your encryption keys; courts have found that this is a Fifth Amendment right, and the gov't hasn't appealed this decision
      • Remote Computing Services, e.g. online backup or file sharing (like the very useful Dropbox). It is very easy for the gov't to get this data. They just need a subpoena, sometimes not even that. Probable cause isn't required, since searching these cloud-based files is often how the gov't shows probable cause. They're not required to notify you within a reasonable time frame
      • Electronic Communication Services, e.g. online mail services like gmail. Your data is only protected for the first 180 days. After that the gov't doesn't need a warrant to get access to this info. However the gov't doesn't think this law applies to emails you've read, drafted, and sent. This is being appealed and the DoJ is fighting it. The EFF, ISPs, and others are trying to get a better law passed, maybe next year (the sooner the better!)
      • The EFF's advice: POP your mail, don't leave it in the cloud, and avoid online backups if possible

      Lord of the Bing: Taking Back Search Engine Hacking from Google and Bing by Rob Ragan and Francis Brown
      The most interesting talk of the day. These guys have taken Google search engine hacking to a whole new level. Very creative. Sadly I haven't found their presentation online, but the tools they wrote are. One of my favorite sections focused on combining Google hacking with custom searches into a massive RSS feed for real-time updates of vulnerable sites crawled by Google. I'm sure we haven't heard the last of this...

      Weaponizing Lady GaGa, Psychosonic Attacks by Brad Smith
      Brad is an excellent speaker and by far the most entertaining of the day. He discussed the uses and misuses of psychosonics: the generation of (generally undetectable) sound patterns designed to alter a target's state of mind. One of the funniest parts of his speech came when he listed the top 10 sonic torture songs... :-)

      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342042 2010-07-30T18:48:44Z 2013-10-08T16:34:45Z Defcon: Hackers have many interests Two pictures of books for sale here at Defcon. The first one is what you'd expect security-minded geeks to read. The second shows some wider-ranging topics...

      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342069 2010-07-30T09:55:45Z 2013-10-08T16:34:45Z Hacking the Defcon 18 Badge Since its 14th edition, Defcon's badges have been electronic. Hardware wizard Joe Grand (he and I both worked at @stake a long time ago, though in different offices) creates these masterpieces and unleashes them on the thousands of people who descend upon Las Vegas every year for this oldest of the US hacker conferences, now in its 18th incarnation.

      Befitting this conference, the badges have all sorts of hidden capabilities, easter eggs, etc. One of Defcon's many challenges is to find these backdoors. This year's badge is no exception: sporting an LCD panel for the first time ever, it responds to button presses with all sorts of cryptic (and some not-so-cryptic) behavior.

      One of the badge's challenges is to crack "Ninja mode", which you enable by picking an electronic lock consisting of fifteen tumblers, each with three states (for a total of over 14 million combinations).

      I had fun with this one. I was making slow, steady progress until I thought of exploring the Defcon CD... Bingo! Joe was thoughtful enough to include a full development environment for the badge, as well as the source code to the firmware! From that point "hacking" became a simple exercise in reverse engineering the code. I won't give the key away, but I will say that Wolfram|Alpha proved very useful for quick conversions between binary, trinary, and hexadecimal.
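For the curious, here's a minimal sketch (in Python rather than Wolfram|Alpha) of how a fifteen-tumbler, three-state combination maps between trinary, binary, and hex. The combination below is made up for illustration, not the real key:

```python
# Treat the lock's fifteen tumblers as the digits of a base-3 number,
# then view that number in the other bases.

def tumblers_to_int(tumblers):
    """Interpret a sequence of base-3 digits (most significant first) as an integer."""
    value = 0
    for digit in tumblers:
        value = value * 3 + digit
    return value

# A made-up combination -- NOT the actual Ninja-mode key!
combo = [2, 0, 1, 1, 2, 0, 0, 1, 2, 1, 0, 2, 1, 0, 2]

n = tumblers_to_int(combo)
print(n)         # the combination as a decimal integer
print(bin(n))    # ...in binary
print(hex(n))    # ...in hexadecimal
print(3 ** 15)   # total key space: 14,348,907 combinations
```

Fifteen trinary digits is just under 24 bits of key space, which is why brute-forcing by hand was slow going.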

      In retrospect I should have looked at that CD much earlier :-)

      Paul Clip
      tag:tech.cyberclip.com,2013:Post/342071 2010-07-27T06:21:13Z 2013-10-08T16:34:45Z 27" iMac Electricity Consumption Stats I pulled our handy little Kill A Watt out from its resting place this week and used it to track our 27" iMac's (quad-core i7 processor) electricity usage. The Kill A Watt plugs in between the wall socket and the device you want to monitor and calculates cumulative energy consumption (kWh), watts, amps, etc. It's particularly useful for figuring out how much an electric appliance, or computer, really costs to run.

      27" Apple iMac        kWh / Day    Cost / Day
      On, screen dark
      On, light usage
      On, max usage

      "Max usage" means all CPUs were chugging away and a DVD was playing. Cost / day is based on my current rate of just under $0.12/kWh.

      Overall that doesn't feel too bad, though it adds up over a year. If you were compressing videos 24x7 for a whole year, it would cost you over $210 (and would probably seriously reduce the lifespan of your iMac to boot :-)
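The arithmetic behind that figure is simple enough to sketch. The ~200 W average draw below is my assumed round number for sustained max usage, not an actual Kill A Watt reading:

```python
# Rough annual running cost from an average power draw.
# Assumption: ~200 W sustained draw; rate of $0.12/kWh as quoted above.

RATE_PER_KWH = 0.12  # dollars per kilowatt-hour

def annual_cost(avg_watts, rate=RATE_PER_KWH):
    """Dollars per year to run a device drawing avg_watts around the clock."""
    kwh_per_year = avg_watts * 24 * 365 / 1000
    return kwh_per_year * rate

print(round(annual_cost(200), 2))  # 200 W non-stop: about $210/year
```

At that rate, every watt of continuous draw costs roughly a dollar a year, a handy rule of thumb at $0.12/kWh.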
      Paul Clip