Next up on my reading list: Spook Country by William Gibson. This completes — years later, and read out of order — his post-9/11 trilogy (along with Pattern Recognition, the first, which I read first, and Zero History, the last). This suffered from Gibson’s usual outdated spy-cool and brand-name-dropping, and his penchant for odd technological whiffs, but I think it was the best of the three: despite those flaws, it displayed his talent for building a gripping story (especially in the second half of the book), with realistic and engaging characters. I still think that the Sprawl trilogy was his best, but this was a good bedside companion for a week or so.
This is the introductory post for a hopefully four-part series about using QGIS to find the shortest path between two points, not shortest as the crow flies, but following a given network of roads. This is called routing; it’s what Google Maps and other mapping software use, and it relies on graph theory and network analysis to do its job. I’ll talk about the what and the why of this little experiment here; the how (for three different versions of how) will be the subject of subsequent posts.
UPDATE: Part 2 can be found here.
The reason I’m looking at all this goes back to my interest in cycling tourism, and my attempts to identify cycling-accessible amenities — convenience stores, restaurants, hotels, that sort of thing — along the Lehigh Towpath. My first attempt (you can find it here) basically defined a region within a mile (as the crow flies) of one section of the Lehigh River, then searched within that region for the amenities I was interested in. That was an interesting project in its own right, but, as I said in my earlier post, it didn’t really solve the right problem: there are many places within a mile, or even a quarter mile, of the river that are not anywhere near accessible from the towpath — they may be on the wrong side of the river, say, or not near a towpath access point. To be considered accessible, the points of interest would need to be within a mile (or whatever arbitrary distance I end up choosing), by road, of an access point on the towpath.
I didn’t really have a plan to make this happen yet, but with or without a specific plan, I figured my first order of business was to get the information I would use. That previous analysis used Google Maps, but I felt that their data was a bit encumbered (in terms of my rights to it), and it seemed that Google didn’t play as well as I’d like with QGIS anyway, so I decided to use the data available through OpenStreetMap, for both the road network and the set of amenities. (I already had a collection of the towpath access point locations left over from a previous experiment.) I got those sets of data, and massaged them so that I only had the parts that fell within a mile of the bike paths in the Lehigh Valley. This gave me the data seen to the right, where the aqua lines are the road network, the red lines are bike trails (the towpath, plus the Palmer Bike Path), the red stars are trail access points, and the orange dots are the amenities (restaurants, fast food etc).
(One note about the road network: You probably can’t see it at this resolution, but I made a point of excluding roads that are not practical/legal/safe for cycling, like US-22, I-78 and a few others. There are also a number of places, like the New Street and Hill-to-Hill Bridges, where roads or the trail are connected via stairways to the bridges above; after our own struggles, a few years ago, with stairs and fully loaded touring bikes at the Ben Franklin Bridge, I decided to also exclude stairways from my network.)
So that gets us the data, what about the analysis? My first thoughts were to see if I could find all the points on the road network that were a mile away from an access point, then connect the dots to define a region, and then find all the amenities within that region. My second thoughts were that this approach would put me back in the same situation as my first attempt, since I could easily find roads that were not reachable within that region, such as bridges. (Bridges became my nemesis for a while.) I eventually decided that my best strategy would be to find the shortest route between each access point and each amenity, and select from the amenities based on the lengths of the routes I found.
To perform the actual routing analysis, I have three options:
- the GRASS networking tools available through the QGIS GRASS plugin
- PostGIS with the pgRouting extension
- the Network Analysis library available through the QGIS Python interface
In terms of a learning curve, I have some experience with networks in GRASS, and I feel at least a little comfortable with Python (and copy-paste, with scripts I find online), so pgRouting will probably be the most difficult for me to pick up. Meanwhile, the Network Analysis library can use the data I already have, but OpenStreetMap deals with road networks in a way that’s not directly compatible with either GRASS or pgRouting — their topological models are different, but that’s an issue for a future post. I would have to either re-import the road network to get it to work with pgRouting, or further process the one I have for GRASS.
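Whichever tool I end up using, the core computation underneath all three is the same: shortest-path search on a weighted graph, classically Dijkstra’s algorithm. Here’s a minimal sketch of the idea in plain Python, with a made-up toy network — the node names and distances are purely illustrative, not my actual road data:

```python
import heapq

def shortest_path_length(graph, start, goal):
    """Dijkstra's algorithm: return the length of the shortest path
    from start to goal, or None if goal is unreachable.
    graph maps each node to a list of (neighbor, edge_length) pairs."""
    dist = {start: 0.0}
    heap = [(0.0, start)]       # priority queue of (distance, node)
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        if node == goal:
            return d
        visited.add(node)
        for neighbor, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return None                 # goal not reachable from start

# A toy "road network": an access point A, an amenity C, and a detour
toy = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)], "C": []}
print(shortest_path_length(toy, "A", "C"))  # 2.0, via B
```

This is also roughly how I’d filter the amenities: compute the shortest route from each access point to each amenity, and keep only those whose route length is under my mile threshold.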
Each one of these approaches will be the subject of its own post. Given that the Python approach is not the hardest, and my data is already in the form I’d need, I am going to try my hand with the Network Analysis library first. Stay tuned for Part 2, whenever…
I still have no idea what’s going wrong with scanning photographs of QR codes (other than, say, generic image quality issues inherent in the process), but I’ve sort of abandoned the whole QR thing. The obsession ran its course, and there was also this:
We went out last weekend with John and Donna, and also a friend of ours who is a programmer. She asked me about my recent projects and I said I was intrigued with QR codes, and she said something to the effect of “Oh, aren’t they a bit passé?”
What?!?? I asked John, and he also felt that they were a technology that seemed promising maybe a few years ago, but eventually the buzz faded as they came to be seen as superfluous — users could type in the information (or capture it another way, like near-field communication) as easily as they could use a phone to scan and capture it from a QR code.
I went home and did a little Googling and — except in the marketroid world, where they definitely seemed passé — the situation wasn’t nearly as dire as the picture my friends painted, but what I saw online did make me reevaluate their usefulness, to take stock as it were, and my interest, already waning, disappeared.
I’m not sure why I did this exactly, but the other day I decided to download a QR code generator onto my phone. I have no real need, but it looked like a fun thing to play with, so I made a few codes (my contact info, “Hello World!” etc), then I thought it would be pretty cool to read and write them from the laptop, so I downloaded a program called qrencode to write them, and one called zbar to read them, and I had a bunch of geeky fun using all my new toys.
Then I got the idea: what if I could take a picture of a QR code, with datestamp and GPS metadata added? I could then extract the QR data, and the time and place it was gathered, like maybe something an inventory program would use. I downloaded another program called exiftool, and figured out how to get the date/time and location from the photos, but the final step, extracting the QR data from the photo of the QR code image, was a failure. I have no idea why yet.
We saw it the other day, basically as soon as it was out in a nearby theater. We happened to go on a weekday matinée, which is what we usually do, but unlike other matinées the place was packed — it looks like we weren’t the only ones who wanted to see this movie. And it did not disappoint: this was one of the few times where the movie audience applauded at the end. My advice: go see it. (You’re welcome.)
The story follows three black women who work as “human computers” for NASA in the early 1960s. “Computer” was actually what they were called; it was a real but low-status job for low-status (female, black) math whizzes in the days before electronic computers, and there were rooms full of them, like steno pools, at NASA. This being Virginia in 1961, our three heroines were relegated even further into the segregated “colored computers” pool. So with the budding Civil Rights movement as backdrop — and this movie excelled at backdrops, with an awesome period score and loads of what looked at least like archival footage — these women broke through racist and misogynist barriers, and got John Glenn into orbit.
And then, just as electronic computers started to threaten their human computing jobs, they figured out how to be the ones to do the necessary work of programming those computers. (It wasn’t in the movie, but programming back then — difficult, exacting, requiring daily brilliance just like now — was another low-status job for “girls.”)
One thing caught me though, not in the story itself but in how the movie was put together. I remember reading once about how some movies were subjected to audience polling, and changes based on that polling, before final release — I wasn’t quite aghast, but it kind of irked me that this was done, and I started seeing what I thought was poll-driven editing everywhere in the movies I watched, and I thought I spotted it here.
There were two (well, three) parallel stories going on: one (well, two) involving a lowly employee showing the higher-ups how it’s done, and the other showing the futuristic but inert IBM that NASA purchased being brought to life. The stories were finally brought together, mostly by the juxtaposition of the two “TRIUMPH! THE END” endings, but at one point there seemed to be an aborted attempt at a connection…
The top NASA engineers are trying to figure out some orbital mechanics and realize that they need a different mathematical approach, and Katherine Johnson says “Euler’s Method!” Eureka! But then that’s it: other than a scene where she reads up on the method in an old text, there’s no follow-up. The thing is though, Euler’s method is a numerical method, made up of many simple calculations instead of a few sophisticated ones, and it’s prohibitively impractical as a tool without the electronic computer. I can almost see the missing scenes, where Katherine’s superiors despair of getting the answer in time because there are just too many calculations, just as Dorothy Vaughan got that old IBM up and running in time to save the day — oh what might have been! …but that’s getting nitpicky, me dreaming up extra scenes, just because I wanted the movie to go on and on.
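(For the curious: Euler’s method approximates the solution of a differential equation by taking many tiny linear steps, which is exactly why it begs for an electronic computer. A toy sketch — the equation dy/dt = y here is just my illustration, nothing to do with the movie’s orbital mechanics:

```python
def euler(f, y0, t0, t1, steps):
    """Euler's method: approximate y(t1) for dy/dt = f(t, y), y(t0) = y0,
    by marching forward in many small linear steps."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)   # each step is one simple calculation...
        t += h             # ...but you need thousands of them
    return y

# dy/dt = y with y(0) = 1 has exact solution e**t, so y(1) should be ~2.71828
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100000)
```

A hundred thousand trivial multiplications: tedium for a room of human computers, a blink for the IBM.)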
This movie was morally affirming — righteous even, and patriotic — without being preachy, pro-science without being hokey, and overall a pleasure to watch. Go see it, and see if you don’t applaud too at the end.
I just finished another of my Christmas books, Hillbilly Elegy by J.D. Vance. This was basically the author’s memoir of growing up, in Ohio, as the grandson of Kentucky hill folk who’d moved there looking for a better life, and his struggles with poverty and family dysfunction before his own escape up the socioeconomic ladder. The book has a bit of celebrity status right now, as various blue-state types try to figure out what’s going on in the Appalachian and Rust Belt hinterlands, and what went wrong in the last election…
The first thing I’ll say is the good news: this book is a fast and interesting read, and the author is personable, and engaging if occasionally prone to humble-bragging, and he writes well. Parts of the story reminded me of my own family history, and the class anxieties that come with upward mobility over generations, while other parts were an unsparing look into the darker aspects of his own subculture.
But the bad news is, he never seems to get to the heart of the problems among the Hillbilly Diaspora. He sometimes resorts to church-and-family bromides, and other times seems to warn against the debilitating effects of welfare, but it’s mostly like he’s dancing around a garden-variety conservatism. He never really comes to any solid conclusion.
I finished the book feeling a bit let down.
Well, the holidays are over: presents unwrapped, toasts drunk, and the house guests all gone home. Now it’s time to assess the past year and plan for the coming one…
I have only one resolution for 2017: I will ride my bike a lot more than I did last year. I mean, last year was kind of pathetic, even after accounting for my July mishap.
I have to say though, that this is mileage, and miles don’t tell the whole story: my best and most enjoyable rides (other than June’s long Jim Thorpe solo ride) were those camping trips in Jim Thorpe and Bald Eagle, and the big Fall group MTB rides — and on those rides I mostly held my own, I was nowhere near as out of shape as I feared. (I won’t show it here, but I just looked it up, and my saddle time last year was almost double my 2015 and 2014 times. For what it’s worth…)
Anyway, my resolution is to ride more.
These aren’t resolutions, but here are some other things I want to be a part of my life in 2017:
- Get a better grip on my finances
- Pick up the cello again
- Learn a few computer-assisted design things (mapping, 3D CAD, the web)
- And finally, I want to find a niche in the gig economy, something to bring in some pin money
I’m already on my way for all but the last, but for that one I’m still not sure what I might want to do or how to make it happen.
I just finished one of my Christmas gifts, the Chinese sci-fi novel The Three-Body Problem, by Liu Cixin and translated by Ken Liu. It’s basically a “first contact” thriller, with an enemy alien invasion looming, a secret society helping the invaders, and the governments of the world secretly planning together for war against both the aliens and the secret society, all set against the backdrop of the Cultural Revolution and its aftermath.
The novel suffers from both stilted dialogue and “stilted” plot turns; some of these may be the effects of translation, or even just its essential Chinese-ness, but it’s also true that the author and translator are both in the “hard sci-fi” camp, which is not known for its Literature-With-A-Capital-L virtues. Anyway, the ideas are big and the action moves at a page-turning pace — it’s definitely a good read.
My biggest problem with this book is that it’s the first of a trilogy, and now I have to read the others to find out how it all ends.
“I woke the President to tell him we were under attack by the Russians!
Do you know how stupid that makes me look?!!”
— WarGames
I moderate comments here: if you’ve never had a comment approved, all your comments go into a holding tank until I either approve or trash them, though once your first comment has been approved your subsequent comments are all automatically approved. It usually doesn’t matter much, since I don’t get many legitimate comments and have only one commenter, but that’s the way I like it because it blocks comment spam.
The other thing about comments is that I get an email every time one is posted. This is on my “extra” email account, which doesn’t get much use, especially after I unsubscribed from a mailing list I was on. Then this afternoon my phone dinged a few times, and when I looked I had 22 messages, all from this site and saying I had comments in moderation…
My site hadn’t gone viral, it was all just robo-spam: gibberish with a couple of websites thrown in, that kind of thing. I dealt with that set of comments by trashing them, and then a few hours later I got more, which I also dealt with. I noticed, though, that despite different names and email addresses, they were all coming from two internet addresses. I blacklisted those addresses, so now the comments go straight to trash, and I don’t get email notifications.
I just checked the comment trash here, and it had a ton of spam comments. I guess I’ll have to check and empty the trash every so often until this entity gets tired of sending them, but as far as I’m concerned it’s problem solved.
By the way, the offending internet addresses are assigned to a Russian ISP.
UPDATE: The spam continued for about 12 more hours then stopped.
Lately I’ve been getting rid of the books I no longer want, by sticking them in a nearby Little Free Library, but the truth is that I no longer want them because I’m tired of re-re-reading them, and I really need some new books. Christmas is coming, so I updated my Amazon Wish List, and our recent Philly visit for Ben’s birthday included a bit of “one for you and one for me” Christmas shopping at Penn Books. In each situation I looked through the books I’d be likely to enjoy, and one in particular kept coming up: Ready Player One by Ernest Cline. In both cases, though, I decided against getting it.
Yesterday found me walking past the Little Free Library, and I took a look inside to see how my books were doing — most had been taken, though a few were surprisingly still there — and what do I see but a copy of Ready Player One? So, on the way back from my walk I stopped again at the library, and, in a first for me, I took the book home.
Bottom line: I think I’ll push myself and finish it, but I am really glad I didn’t pay money for this book, or talk a friend or relative into buying it for me. The protagonist is annoying and unlikeable (this may be on purpose though, since he’s a maladjusted 13-year-old at the start of the story), the premise is hackneyed, and — this is a pet peeve of mine — the cultural references are basically our own recent past because, while the story occurs about 50 years into the future, the people there are conveniently obsessed with 80’s electronics, video games, and pop trivia. Lazy…
When I’m done with it — and this might not mean actually getting to the end, just to the point where I get sick of reading it — this book is going straight back to the free library.