Tag Archive: javascript

Sep 20 2016

Book review: Beautiful JavaScript edited by Anton Kovalyov

I have approached JavaScript in a crabwise fashion. A few years ago I managed to put together some visualisations by striking randomly at the keyboard. I then read Douglas Crockford's JavaScript: The Good Parts, and bought JavaScript Bible by Danny Goodman, Michael Morrison, Paul Novitski and Tia Gustaff Rayl, which I used as a monitor stand for a couple of years.

Working at ScraperWiki (now The Sensible Code Company), I wrote some more rational code whilst pair-programming with a colleague. More recently I have been building demonstration and analytical web applications in JavaScript which access databases and display layered maps; some of the effects I achieve are even intentional! The importance of JavaScript for me is that nowadays, when I come to make a GUI for my analysis (usually written in Python), the natural thing to do is build a web interface using JavaScript/CSS/HTML, because the "native" GUI toolkits for Python are looking dated and unloved. As my colleague pointed out, every decent web browser now comes with a pretty complete IDE for JavaScript, which allows you to run and inspect your code, profile network activity, add breakpoints and emulate a range of devices in both display and network bandwidth capabilities. Furthermore, there are a large number of libraries to help with almost any task. I've used d3 for visualisations, jQuery for just about everything, OpenLayers for maps, and three.js for high performance 3D rendering.

This brings me to Beautiful JavaScript: Leading Programmers Explain How They Think, edited by Anton Kovalyov. The book is an edited volume featuring chapters from 15 experienced JavaScript programmers. The style varies dramatically, as you might expect, but the chapters are well-edited and readable; overall the book is only 150 pages. My experience is that learning a programming language is much more than the brute detail of the language syntax; reading this book is my way of finding out what I should do, rather than what it is possible to do.

It’s striking that several of the authors write about introducing class inheritance into JavaScript. To me this highlights the flexibility of programming languages, and possibly the inflexibility of programmers. Despite many years of abstract learning about object-oriented programming I persistently fail to do it, even when the features are available in the language I am using. I blame some of this on a long association with FORTRAN and then Matlab, which only introduced object-oriented features later in their lives. “Proper” developers, it seems, are taught to use class inheritance, and when the language they select does not offer it natively they improvise to re-introduce it. Since Beautiful JavaScript was published, JavaScript has gained a class keyword, but this simply provides a prettier way of accessing JavaScript’s prototype-based inheritance mechanism.
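To make that last point concrete, here is a minimal sketch of my own (not an example from the book) showing that the class syntax is essentially sugar over the prototype mechanism:

```javascript
// ES2015 class syntax...
class Dog {
  constructor(name) { this.name = name; }
  speak() { return this.name + " says woof"; }
}

// ...is essentially a prettier spelling of the older prototype pattern.
function Cat(name) { this.name = name; }
Cat.prototype.speak = function () { return this.name + " says miaow"; };

console.log(new Dog("Rex").speak()); // Rex says woof
console.log(new Cat("Tom").speak()); // Tom says miaow

// Under the hood a class is still a function working through prototypes:
console.log(typeof Dog); // "function"
console.log(Object.getPrototypeOf(new Dog("Fido")) === Dog.prototype); // true
```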

Other chapters in Beautiful JavaScript are about coding style. For teams, a consistent and unflashy style is more important than using a language to its limits. Some chapters demonstrate just what those limits can be; for example, Graeme Roberts’ chapter “JavaScript is Cutieful” introduces us to some very obscure code. Other chapters offer practical implementations of a maths parser and a domain specific language parser, and some notes on proper error handling.

JavaScript is an odd sort of a language: at first it seemed almost like a toy language designed to do minor tasks on web pages, yet twenty years after its birth it is everywhere, and multiple billion dollar businesses are built on top of it. If you like, you can now code in JavaScript on your server, as well as in the client web browser, using node.js. You can write in CoffeeScript, which compiles to JavaScript (I’ve never seen the point of this). Chapters by Jonathan Barronville on node.js and Rebecca Murphey on Backbone highlight this growing maturity.

Anton Kovalyov writes on how JavaScript can be used as a functional language. It’s illuminating to see this discussion alongside those looking at class inheritance-like behaviour. It highlights the risk of treating JavaScript as a language with class inheritance, or as a “true” functional language: although JavaScript might look like these things, ultimately it isn’t, and this may cause problems. For example, functional languages rely on data structures being immutable, but in JavaScript they aren’t, so although you might decide in your functional programming mode that you will not modify the input arguments to a function, JavaScript will not stop you from doing so.
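A minimal sketch of the kind of problem I mean (my example, not Kovalyov’s):

```javascript
// Nothing stops a function from mutating its arguments in JavaScript:
function total(numbers) {
  numbers.sort(); // oops: sort() reorders the caller's array in place
  return numbers.reduce(function (a, b) { return a + b; }, 0);
}

var values = [3, 1, 2];
total(values);
console.log(values); // [1, 2, 3] - the "input" has been changed

// Immutability is opt-in, e.g. Object.freeze() gives shallow protection:
var frozen = Object.freeze([3, 1, 2]);
// frozen.sort(); // would throw a TypeError rather than mutate
```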

The authors are listed with brief biographies in the dead zone beyond the index, which is a pity because the biographies could very usefully have been presented at the beginning of each chapter. They are: Anton Kovalyov, Jonathan Barronville, Sara Chipps, Angus Croll, Marijn Haverbeke, Ariya Hidayat, Daryl Koopersmith, Rebecca Murphey, Daniel Pupius, Graeme Roberts, Jenn Schiffer, Jacob Thornton, Ben Vinegar, Rick Waldron and Nicholas Zakas. They have backgrounds with Twitter, Medium, Yahoo and diverse other places.

Beautiful JavaScript is a short, readable book which gives the relatively new JavaScript programmer something to think about.

Oct 30 2015

Analysing LIDAR data for the UK

I’m currently between jobs for a couple of weeks, so I have time to play with data.

The Environment Agency (EA) has recently released its LIDAR data for England, amounting to several terabytes of the stuff. LIDAR is a laser ranging technology which gives you the height profile of the surface under inspection. You can get a feel for the data from this excerpt of central Chester:

SJ46-Chester-512x512

The brightness of a pixel shows the height of a feature, so the race course (lower left) appears dark since it is a low flat region close to the River Dee. The CWAC HQ building is tall and appears bright. To the north of the city are a set of three high rise flats, which appear bright. The distinctive cross-shape of the cathedral, with its high, bright central tower, is also visible. It’s immediately obvious that LIDAR is an excellent tool for picking out the footprint of buildings.

We can use the image above to make a 3D projection view where the brightness of a pixel is mapped to height:

Chester-3D

The orientation of this image is the same as that of the first image; the three tower blocks are visible top right, and the CWAC HQ is visible lower left.

The images above use the lowest spatial resolution data, where each pixel is 2mx2m. The data released have spatial resolutions from 2m down to 25cm for selected areas. Looking at the areas where the high resolution data are available, it becomes very obvious what the primary uses of the data are: flood and coastal defences.

You can find the LIDAR data here. It’s divided up into several datasets. Surface data gives height information including all objects on the land, such as buildings, trees and vehicles, whilst Terrain data is processed to remove these artefacts and show the pristine land surface.

Composite data are compiled to give maximum coverage by combining data from surveys conducted in different years and at different resolutions, whilst Tile data are the underlying raw data collected in different years and at different resolutions. The coverage sliders show the coverage of each dataset. The data are for England only.

The images of Chester shown above are an excerpt from a 10kmx10km tile, shown below:

SJ46

Chester is on the left of this image, above the dark bend of the River Dee flood plain. On the right hand side we can see the valley of the River Gowy and its tributaries – features which are not obvious on the ground or in Google Maps. The large black area is where there is no data; the smaller irregular black patches seem to correlate with water. You might just be able to pick out the line of the Shropshire Union canal cutting through the middle of the image.

I used Chester as an illustration because that’s where I live. I started looking at this data because I was curious, and I’ve spent a happy few days downloading data for lots of different places and playing with it.

It’s great to see data like this being released under permissive conditions. The Environment Agency has been collecting this data for its own purposes, and it’s been available from them commercially for a while – no doubt as a result of a central government edict to maximise revenue from it.

Opening the data like this means the curious can have a rummage, and perhaps others will find a commercial value in it.

I’ve included a few more images below. After them you can see the technical details of how to process these data and make the visualisations for yourself; the code is all in this GitHub repository:

https://github.com/IanHopkinson/defra-lidar-viewer

It is shared under the MIT license.

Liverpool in 3D with the Radio City tower

Liverpool-3D

Liverpool Metropolitan Cathedral at 1m resolution

Liverpool-Metropolitan-cathedral-3D

St Paul’s Cathedral

StPauls-3D

Technical Details

The code used to make the figures in this blog post can be found here:

https://github.com/IanHopkinson/defra-lidar-viewer

The GitHub repository contains a readme file which describes the code, and provides links to the original data, other useful commentary and the numerous bits of code I borrowed from the internet.

The data start as sets of zipped text file archives, where each archive contains the data for a 10kmx10km OS National Grid square – Chester is in the SJ46 cell. An archive contains a maximum of 100 text files, each one containing data for a single 1kmx1km square; the size of each file depends on the resolution of the data. I wrote a Python program to read the data for a 10kmx10km cell and convert it into a PNG format image. This program also calculates the bounding box of the cell in latitude and longitude. The processing program works fine for 2m and 1m resolution data. It just about works for 50cm data, but is slow and throws memory errors, and for 25cm resolution data it doesn’t yet work.

I made a visualisation using the leaflet.js library, which allows you to overlay the PNG images generated above onto OpenStreetMap maps. The opacity of the image can be varied with a slider so that you can match LIDAR features to map features. The registration between the two data sources is pretty good, but there are systematic problems which I believe might be due to different mapping projections being used by the Ordnance Survey and OpenStreetMap.
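For anyone curious, the core of the overlay idea looks something like the sketch below; the file name, bounding box coordinates and element id are illustrative, not lifted from the repository:

```javascript
// Base map centred near Chester (coordinates and zoom are illustrative)
var map = L.map('map').setView([53.19, -2.89], 12);

L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// Overlay the LIDAR PNG using the bounding box computed for its grid cell
var bounds = [[53.13, -3.00], [53.23, -2.85]]; // [south-west, north-east] corners
var lidar = L.imageOverlay('SJ46.png', bounds, { opacity: 0.6 }).addTo(map);

// Wire an HTML range input to the overlay so the opacity can be varied
document.getElementById('opacity-slider').addEventListener('input', function (e) {
  lidar.setOpacity(e.target.value / 100);
});
```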

map-overlay  

A second visualisation tool uses the three.js library to make an interactive 3D view. The input data are manual crops of approximately 512×512 pixels from the raw PNGs; I made these using Paint.NET, but other image editors would work fine. Larger images work but are smoothed down to 512×512 in the rendering. A gotcha here is that the revision number of the three.js library is important – the code for this visualisation leant heavily on previous work by others, and whilst integrating new functionality it was important to use three.js source files from the same revision. This visualisation allows you to manipulate the view with the mouse; it takes a while to load up but once loaded it is pretty fast. Trying to upload a subsequent image doesn’t work.
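The heart of such a view is a displaced plane. Below is a minimal sketch of the idea using a recent three.js revision – not the code in the repository, and, as noted above, the exact API depends on the revision you pin; the file name is also illustrative:

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 80, 120);
camera.lookAt(0, 0, 0);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The LIDAR PNG both shades the surface and displaces it vertically
const heightMap = new THREE.TextureLoader().load('chester-crop.png'); // illustrative file name
const geometry = new THREE.PlaneGeometry(100, 100, 511, 511); // ~one vertex per pixel of a 512x512 crop
const material = new THREE.MeshStandardMaterial({
  map: heightMap,
  displacementMap: heightMap, // pixel brightness pushes vertices out of the plane
  displacementScale: 20
});
const terrain = new THREE.Mesh(geometry, material);
terrain.rotation.x = -Math.PI / 2; // lay the plane flat, so displacement becomes height
scene.add(terrain);

const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(50, 100, 50);
scene.add(light);

renderer.setAnimationLoop(function () {
  renderer.render(scene, camera);
});
```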

3D-view

I’m still working on the code; I’d like to be able to process the 25cm data, and it would be good to be able to select an area from the map and convert it to a 3D view automatically.

Aug 21 2015

The London Underground – Can I walk it?

There are tube strikes planned for 25th August 2015 and 28th August 2015, with disruption through the week. The nature of the London Underground means that it is not at all obvious that walks between stations can be quite short. This blog post introduces a handy tool to help you work out “Can I walk it?”

You can find the tool here:

http://www.caniwalkit.co.uk/

To use it, start by selecting the station you want to walk from, either by using the “Where am I?” dropdown or by clicking one of the coloured station symbols (or close to it). The map will then refresh: the station you selected is marked by a red disk, the stations within 1.5 miles of the starting station are marked by orange disks, and those more than 1.5 miles away are marked by blue disks. 1.5 miles is my “walkable” threshold; it takes me about 25 minutes to walk that far. You can enter your own “walkable” threshold in the “I will walk” box and press refresh, or select a new starting station to refresh the map.

The station markers will show the station names on mouseover, and the distances to the starting station once it has been selected.

This tool comes with no guarantees: the walking distances are estimated, and these estimates may be faulty, particularly for river crossings. Weather conditions may make walking unpleasant or unwise. The tool relies on the user to supply their own reasonable walking threshold. Your mileage may vary.

To give a little background to this project: I originally made this tool using Tableau. It was OK but tied to the Tableau Public platform. I felt it was a little slow and unresponsive. It followed some work I’d done visualising data relating to the London Underground which you can read about here.

As an exercise, I thought I’d try to make a “Can I walk it?” web application, re-writing the original visualisation in JavaScript and Python. I’ve been involved with projects like this at ScraperWiki but never done the whole thing for myself. I used the leaflet.js library to provide the mapping, the Flask library in Python to serve the data, Bootstrap to make it look okay, and Docker containers on Digital Ocean to deploy the application.

The underlying data for this tool come from Open Street Map, where the locations of all the London Underground stations are encoded as latitude and longitude. With this information in hand it is possible to calculate the distances between stations. Really I want the “walking distance” between stations, rather than the as-the-crow-flies distance which is what this data gives me. Ideally, to get the walking distance I’d use the Google Directions API, but unfortunately this has a rate limit of 2500 calls per day and I need to make about 36000 calls to get all the data I need!
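So the tool works with straight-line distances. For reference, here is a minimal sketch of the standard haversine calculation for the crow-flies distance between two latitude/longitude points (the station coordinates in the example are approximate, for illustration only):

```javascript
// Great-circle ("crow flies") distance between two lat/long points, in miles
function haversineMiles(lat1, lon1, lat2, lon2) {
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var R = 3959; // Earth's mean radius in miles
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.pow(Math.sin(dLat / 2), 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.pow(Math.sin(dLon / 2), 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Oxford Circus to Piccadilly Circus - roughly half a mile apart
console.log(haversineMiles(51.5152, -0.1418, 51.5101, -0.1342).toFixed(2));
```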

The code is open source and available in this BitBucket repository:

https://bitbucket.org/ian_hopkinson/london-underground-app

Comments and feedback are welcome!

Jun 26 2013

Posting abroad: my book reviews at ScraperWiki

It’s been a bit quiet on my blog this year, partly because I’ve got a new job at ScraperWiki. This has reduced my blogging for two reasons: the first is that I am now much busier, and the second is that I write for the ScraperWiki blog. I thought I’d summarise here what I’ve done there, just to keep everything in one place.

There’s a lot of programming and data science in my new job, so I’ve been reading programming and data analysis books on the train into work. The book reviews are linked below:

I seem to have read quite a lot!

Related to this is a post I did on “Enterprise Data Analysis and Visualization: An Interview Study”, an academic paper published by the Stanford Visualization Group.

Finally, I’ve been on the stage – or at least presenting at a meeting – speaking at Data Science London a couple of weeks ago about Scraping and Parsing PDF files. I wrote a short summary of the event here.

[Book cover images: Data Visualisation (Andy Kirk), JavaScript: The Good Parts, Machine Learning, Interactive Data Visualization, Natural Language Processing with Python, R in Action]

May 09 2013

Book review: Interactive Data Visualization for the Web by Scott Murray

interactivevisualisation

This post was first published at ScraperWiki.

Next in my book reading, I turn to Interactive Data Visualization for the Web by Scott Murray (@alignedleft on twitter). This book covers the d3 JavaScript library for data visualisation, written by Mike Bostock, who was also responsible for the Protovis library. If you’d like a taster of the book’s content, a number of the examples can also be found on the author’s website.

The book is largely aimed at web designers who are looking to include interactive data visualisations in their work. It includes some introductory material on JavaScript, HTML, and CSS, so has some value for programmers moving into web visualisation. I quite liked the repetition of this relatively basic material, and the conceptual introduction to the d3 library.

I found the book rather slow: on page 197 – approaching the final fifth of the book – we were still making a bar chart. A smaller effort was expended in that period on scatter graphs. As a data scientist, I expect to have several dozen plot types in that number of pages! This is something of which Scott warns us, though. d3 is a visualisation framework built for explanatory presentation (i.e. you know the story you want to tell) rather than being an exploratory tool (i.e. you want to find out about your data). To be clear: this “slowness” is not a fault of the book, rather a disjunction between the book and my expectations.

From a technical point of view, d3 works by binding data to elements in the DOM of a webpage. It’s possible to do this for any element type, but practically speaking only Scalable Vector Graphics (SVG) elements make real sense. This restriction means that d3 will only work in more recent browsers, which may be a problem for those trapped in some corporate environments. The library contains a lot of helper functions for generating scales, loading up data, selecting and modifying elements, animation and so forth. d3 is a low-level library; there is no PlotBarChart function.
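A minimal sketch of the data binding pattern, in the style the book teaches (my own example, though it closely follows the book’s bar chart approach):

```javascript
var dataset = [5, 10, 15, 20, 25];

var svg = d3.select("body")
  .append("svg")
  .attr("width", 300)
  .attr("height", 120);

// Bind one datum to each (as yet non-existent) rect; enter() creates the missing ones
svg.selectAll("rect")
  .data(dataset)
  .enter()
  .append("rect")
  .attr("x", function (d, i) { return i * 60; })
  .attr("y", function (d) { return 120 - d * 4; }) // SVG y runs downwards, so offset from the bottom
  .attr("width", 50)
  .attr("height", function (d) { return d * 4; })
  .attr("fill", "steelblue");
```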

Achieving the static effects demonstrated in this book using other tools such as R, Matlab, or Python would be a relatively straightforward task. The animations, transitions and interactivity would be more difficult to do. More widely, the d3 library supports the creation of hierarchical visualisations which I would struggle to create using other tools.

This book is quite a basic introduction; you can get a much better overview of what is possible with d3 by looking at the API documentation and the Gallery. Scott lists quite a few other resources, including a wide range for the d3 library itself, systems built on d3, and alternatives to d3 if it is not the library you are looking for.

I can see myself using d3 in the future, perhaps not for building generic tools but for custom visualisations where the data is known and the aim is to best explain that data. Scott quotes Ben Shneiderman on this, regarding the structure of such visualisations:

overview first, zoom and filter, then details on demand
