I still have no idea what’s going wrong with scanning photographs of QR codes (other than, say, generic image quality issues inherent in the process), but I’ve sort of abandoned the whole QR thing. The obsession ran its course, and there was also this:
We went out last weekend with John and Donna, and also a friend of ours who is a programmer. She asked me about my recent projects and I said I was intrigued with QR codes, and she said something to the effect of “Oh, aren’t they a bit passé?”
What?!?? I asked John, and he also felt that they were a technology that seemed promising maybe a few years ago, but eventually the buzz faded as they were seen to be superfluous — users could type the information in (or capture it another way, like near-field communication) as easily as they could scan and capture it from a QR code with a phone.
I went home and did a little Googling and — except in the marketroid world where it definitely seemed passé — the situation wasn’t nearly as dire as the picture my friends painted, but what I saw online did make me reevaluate their usefulness, to take stock as it were, and my interest, already waning, disappeared.
I’m not sure why I did this exactly, but the other day I decided to download a QR code generator onto my phone. I have no real need, but it looked like a fun thing to play with, so I made a few codes (my contact info, “Hello World!” etc.), then I thought it would be pretty cool to read and write them from the laptop, so I downloaded a program called qrencode to write them, and one called zbar to read them, and I had a bunch of geeky fun using all my new toys.
Then I got the idea: what if I could take a picture of a QR code, with datestamp and GPS metadata added? I could then extract the QR data, and the time and place it was gathered, like maybe something an inventory program would use. I downloaded another program called exiftool, and figured out how to get the date/time and location from the photos, but the final step, extracting the QR data from the photo of the QR code image, was a failure. I have no idea why yet.
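For what it’s worth, here’s roughly the pipeline I had in mind, sketched in Python. It assumes the zbarimg (part of zbar) and exiftool command-line tools are installed; the function names and file handling are just mine.

```python
import subprocess

def scan_qr(photo):
    """Decode QR data from an image file with zbarimg (part of zbar)."""
    out = subprocess.run(["zbarimg", "--quiet", "--raw", photo],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def parse_tagline(line):
    """Split exiftool's -T (tab-separated) output into its fields."""
    return line.rstrip("\n").split("\t")

def photo_metadata(photo):
    """Pull the timestamp and GPS position from a photo's EXIF data."""
    out = subprocess.run(
        ["exiftool", "-T", "-DateTimeOriginal", "-GPSPosition", photo],
        capture_output=True, text=True, check=True)
    return parse_tagline(out.stdout)
```

The step that fails for me corresponds to scan_qr: zbar handles clean, screen-rendered codes without complaint, but chokes on my photographs.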
We saw it the other day, basically as soon as it was out in a nearby theater. We happened to go on a weekday matinée, which is what we usually do, but unlike other matinées the place was packed — it looks like we weren’t the only ones who wanted to see this movie. And it did not disappoint: this was one of the few times where the movie audience applauded at the end. My advice: go see it. (You’re welcome.)
The story follows three black women who work as “human computers” for NASA in the early 1960s. “Computer” was actually what they were called; it was a real but low-status job for low-status (female, black) math whizzes in the days before electronic computers, and there were rooms full of them, like steno pools, at NASA. This being Virginia in 1961, our three heroines were relegated even further into the segregated “colored computers” pool. So with the budding Civil Rights movement as backdrop — and this movie excelled at backdrops, with an awesome period score and loads of what looked at least like archival footage — these women broke through racist and misogynist barriers, and got John Glenn into orbit.
And then, just as electronic computers started to threaten their human computing jobs, they figured out how to be the ones to do the necessary work of programming those computers. (It wasn’t in the movie, but programming back then — difficult, exacting, requiring daily brilliance just like now — was another low-status job for “girls.”)
One thing caught me though, not in the story itself but in how the movie was put together. I remember reading once about how some movies were subjected to audience polling (and changes based on that polling) before final release. I wasn’t quite aghast, but it kind of irked me that this was done, and I started seeing what I thought was poll-driven editing everywhere in the movies I watched, and I thought I spotted it here.
There were two (or three) parallel stories going on: one (or two) involving a lowly employee showing them how it’s done, and the other showing the futuristic but inert IBM that NASA purchased being brought to life. The stories were finally brought together, mostly by the juxtaposition of the two “TRIUMPH! THE END” endings, but at one point there seemed to be an aborted attempt at a connection…
The top NASA engineers are trying to figure out some orbital mechanics and realize that they need a different mathematical approach, and Katherine Johnson says “Euler’s Method!” Eureka! But then that’s it: other than a scene where she reads up on the method in an old text, there’s no follow-up. The thing is though, Euler’s method is a numerical method, made up of many simple calculations instead of a few sophisticated ones, and it’s prohibitively impractical as a tool without the electronic computer. I can almost see the missing scenes, where Katherine’s superiors despair of getting the answer in time because there are just too many calculations, just as Dorothy Vaughan gets that old IBM up and running in time to save the day — oh what might have been! …but that’s getting nitpicky, me dreaming up extra scenes, just because I wanted the movie to go on and on.
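For the non-math-nerds: Euler’s method trades one clever closed-form solution for a huge number of dumb little arithmetic steps, which is exactly why it wants an electronic computer. A toy sketch in Python, with an example equation of my own choosing:

```python
def euler(f, t0, y0, t_end, n):
    """Integrate dy/dt = f(t, y) from t0 to t_end in n equal steps."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)  # one trivial update...
        t += h            # ...repeated n times
    return y

# dy/dt = y with y(0) = 1 has the exact solution e**t, so at t = 1
# this should land near e = 2.71828...
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 100_000))
```

A hundred thousand additions and multiplications to get one number: no problem for an IBM, hopeless for a room of people with pencils.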
This movie was morally affirming — righteous even, and patriotic — without being preachy, pro-science without being hokey, and overall a pleasure to watch. Go see it, and see if you don’t applaud too at the end.
“I woke the President to tell him we were under attack by the Russians!
Do you know how stupid that makes me look?!!”
— War Games
I moderate comments here: if you’ve never had a comment approved, all your comments go into a holding tank until I either approve or trash them, though once your first comment has been approved your subsequent comments are all automatically approved. It usually doesn’t matter much, since I don’t get many legitimate comments and have only one commenter, but that’s the way I like it because it blocks comment spam.
The other thing about comments is that I get an email every time one is posted. This is on my “extra” email account, which doesn’t get much use, especially after I unsubscribed from a mailing list I was on. Then this afternoon my phone dinged a few times, and when I looked I had 22 messages, all from this site and saying I had comments in moderation…
My site hadn’t gone viral, it was all just robo-spam: gibberish with a couple of websites thrown in, that kind of thing. I dealt with that set of comments by trashing them, and then a few hours later I got more, which I also dealt with. I noticed, though, that despite different names and email addresses, they were all coming from two internet addresses. I blacklisted those addresses, so now the comments go straight to trash, and I don’t get email notifications.
I just checked the comment trash here, and it had a ton of spam comments. I guess I’ll have to check and empty the trash every so often until this entity gets tired of sending them, but as far as I’m concerned it’s problem solved.
By the way, the offending internet addresses are assigned to a Russian ISP.
UPDATE: The spam continued for about 12 more hours then stopped.
I had a problem to solve at work last year, basically to make a cone out of bent tubes, to form a cone-shaped “throat opening” in a wall made of vertical tubes. The task needed a bit of iterative trial-and-error to solve for each tube, which quickly becomes tedious when there are maybe a dozen tubes that have to be looked at — half a day’s work — for any given throat configuration, and there were a bunch of configurations we wanted to explore.
You can read about it here, but after that first day of tedium I decided to see if I could automate the process. I wrote a short C program, including a set of vector functions and a root-finding function (using the Bisection Algorithm, which is supposedly slow but fast enough for my purpose — more important to me was that it’s pretty robust, and guaranteed to work in my situation), to find the necessary workpoints and design requirements for an individual tube in the cone. I then wrote another program to generate the input data for each individual tube, based on the tube, wall and cone parameters. I could give the “cone_maker” program the tube OD, bend radius and minimum allowed straight between bends (tube parameters), the number of tubes and tube spacing on the wall (wall parameters), and the cone inner and outer diameter (cone parameters), and pipe the results through my original “throat tube calculator” program, to get the data I needed. The programming took about two days, maybe a total of four actual hours of programming time, and it ran — flawlessly — in seconds.
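For the curious, bisection is simple enough to sketch in a few lines; this is a generic Python version of the idea, not my actual C code. It just keeps halving an interval known to bracket a root, which is why it’s slow but can’t fail.

```python
def bisect(f, a, b, tol=1e-9):
    """Find a root of f on [a, b], where f(a) and f(b) differ in sign."""
    fa = f(a)
    assert fa * f(b) < 0, "the root must be bracketed"
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:   # sign change in [a, m]: keep the left half
            b = m
        else:                # otherwise the root is in [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Example: solve x**2 = 2 on [0, 2], i.e. find sqrt(2).
print(bisect(lambda x: x * x - 2.0, 0.0, 2.0))
```

Each pass cuts the uncertainty in half, so the run time is predictable and there’s no way to wander off to infinity, which is what I meant by “guaranteed to work.”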
Unfortunately, to use the program I had to go through a whole rigmarole, running it on my SDF free shell account and accessing it on my phone via ssh, since we had no real resources for running or compiling programs at work. The process was faster, but still very tedious — you try typing dozens of numbers into and reading the results off a tiny phone screen — but it got the job done.
The program did what it needed to, and it looked like I wouldn’t ever need to use it anymore, but I started thinking about program improvements to make the tube design process easier. You can read about these changes here, but what I decided to do was add new output options to the throat tube bend calculator: one option that produces AutoCAD commands to draw the “skeleton” of the tubes, and another to create a lisp file (AutoCAD uses lisp as its scripting language) to make a 3D model of the cone tubes. This took more work than it needed to because checking the results had to be done at work, while coding had to be done at home, but within days I had the program output running smoothly. I then armored the programs and turned them into a CGI script, and made a web page to access it.
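The “skeleton” output boiled down to printing plain AutoCAD commands that a script run could replay. In that spirit, a hypothetical Python fragment (the real program was C, and these workpoints are invented):

```python
def line_cmd(p1, p2):
    """One AutoCAD LINE command from p1 to p2 ((x, y, z) tuples); in a
    script file, spaces act as Enter, so the trailing space ends LINE."""
    fmt = lambda p: ",".join(f"{c:.4f}" for c in p)
    return f"_LINE {fmt(p1)} {fmt(p2)} "

# Invented workpoints standing in for one tube's calculated centerline.
workpoints = [(0.0, 0.0, 0.0), (0.0, 0.0, 10.0), (5.0, 0.0, 15.0)]
script = "\n".join(line_cmd(a, b) for a, b in zip(workpoints, workpoints[1:]))
print(script)
```

The lisp output for the 3D model worked the same way, just generating lisp expressions instead of bare commands.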
Here’s the calculator web page, and the results can be seen to the left. I had absolutely no use for the calculator anymore, but it sure was fun to play with.
Fast forward to now, and I thought it would be fun to play with again — unfortunately, I don’t have AutoCAD at home, and am not likely to get it anytime soon, but I do have a program called FreeCAD. Now FreeCAD doesn’t use AutoCAD’s commands or scripting languages, but it does have a built-in scripting language of its own: Python.
Python has been on my radar for a while, and with my recent QGIS forays (QGIS also uses Python as a scripting language) I’ve been motivated to learn a bit more about it. Then I happened to see my version of FreeCAD get auto-updated the other day, and thought it would be nice to play with, and maybe pick up on some Python on the way….
So I rewrote my cone maker & tube calculator programs in Python. Much (but not all) of the vector stuff is available in a library, and so are root-finding algorithms — just for laughs I used Brent’s Method, which combines bisection with faster interpolation steps — and Python code is naturally more compact-looking than C, so the final program looked really nice, and much shorter than my original C programs. In terms of running, there seemed to be a lag at first (probably from importing all the libraries I called for), but the output just about spit itself out.
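(The libraries in question, for the record: numpy covers the vector arithmetic, and scipy.optimize carries the root finders, brentq among them. The vector half looks about like this; the sample vectors are arbitrary.)

```python
import numpy as np

# The hand-rolled C vector functions map onto numpy one-liners.
a = np.array([1.0, 2.0, 2.0])
b = np.array([0.0, 1.0, 0.0])

dot = np.dot(a, b)          # scalar product
cross = np.cross(a, b)      # vector product
length = np.linalg.norm(a)  # magnitude: sqrt(1 + 4 + 4) = 3.0
print(dot, cross, length)
```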
Once I got the program to produce correct numerical output, I moved it into FreeCAD and started figuring out how to create the tubes. This took a bit of research, and a bit of trial and error, but the whole learning process took less than a day and then it was running beautifully — you can see the results to the right, and the full throat below.
So what else have I been up to lately? I decided to look a bit more closely at QGIS, the open-source GIS program, and so I found a group of online courses on using it. They’re free, and you kind of get what you pay for here, but they’ve been an eye-opener into QGIS and its capabilities — it’s a much more powerful program than I realized, and with the ability to run R, GRASS and Python scripts, as well as automating tasks (and linking them together like unix pipelines), it’s got almost limitless expandability. I’m working through the third course (of five) right now, and when I’m done with these I may look into going further.
I love my Garmin, but the map it came with was horrible, so I replaced it with one from OpenStreetMap. (This is not news; I got the map years ago.) The process is tedious but pretty simple: there are sites you go to, and you pick what parts of the world you want a map of, then they do some data processing and email you to let you know when you can download your map file. The files are huge, like 3-4 GB for the one I got for North America, and they take a while to process and even longer to download. But once you have the file, you just put it on a micro-SD card, stick it in your GPS, and voilà — a much better map!
Maybe it was the choice of map file I made back then, or maybe OpenStreetMap back then was less complete, but my map didn’t have many offroad trails. I didn’t feel the lack too sorely, since on most of my offroad rides I already know where I am, but after the last big ride — when I had become a bit lost — I looked at our path on the latest OpenStreetMap cycling map, and I saw all the trails through the strip mines — singletrack, jeep road and all. Boy, wouldn’t that have been nice to have on the ride! I also noticed that all the trails on Broad Mountain are now on the map, including the “secret singletrack.” I’ve done a couple of (road) rides recently where I mapped out a course online, downloaded it to the Garmin, and used its routing features, “turn left onto Main Street in 100 yards” etc., to follow my course, and I thought it would be a great thing to try routing on an offroad ride. The only thing I’d need would be routable trail maps…
My understanding of the Garmin 810 is that multiple maps can be installed and enabled, and I’d been reading up on how to make the Garmin maps. (For years I thought it would be a cool project to make small custom maps of local trail systems, either standalone or as add-ons to a base map, but other than some re-purposing of GPX tracks I never really pursued it.) I didn’t feel like going through the process of downloading another huge (updated) map of North America from that map service again, but generating much smaller add-on maps myself, using OpenStreetMap data and the same software the original map service used, seemed to be fairly straightforward, and I could make a smaller updated file to add to my base map.
So that’s exactly what I did: I downloaded the data for a region around Jim Thorpe and saved it on my machine, then ran a Java program called “mkgmap” to create the map file. Installed it on my Garmin, and voilà — the trails were there! I then created a course online, following some Broad Mountain trails I know well, and arranged to go riding with Rich B.
Results were mixed. Our ride was great, but the downloaded route beeped an error message while loading and would not do any routing, though it would show the route on the map, and would indicate if we went off course. I got home and found that I’d compiled the map without routing capabilities, so I recompiled and reloaded my new map; it still awaits testing since I don’t get up to Jim Thorpe on a very regular basis.
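The fix amounted to one mkgmap flag. A sketch of the invocation (the file names here are mine, and mkgmap.jar has to be wherever you actually run this):

```python
import subprocess

# The important part is --route, which makes mkgmap build routing data
# (without it the map displays but won't navigate), and --gmapsupp,
# which bundles the output as a gmapsupp.img for the memory card.
cmd = ["java", "-jar", "mkgmap.jar",
       "--route",
       "--gmapsupp",
       "jim_thorpe.osm.pbf"]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment where mkgmap.jar actually lives
```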
Meantime, I thought I’d make a similar map for the trails at Lake Nockamixon, since I did have immediate plans to ride there. I drew up a course to follow (which worked fine), and compiled a map of the Nockamixon area, but this new map would not display on, or even be recognized by, my GPS. I tried making a few other maps, but the only one that ever worked was the original Jim Thorpe one, and I have no idea why. I eventually got so frustrated that I went out and bought a new micro-SD card, and re-downloaded the map of North America, a process that took about six hours (though I wasn’t actually present for most of it).
My next offroad ride will include a test of the trail routing capability of my new map. It better work.
(Just as an aside: my resting heart rate this morning was 49 BPM.)
A computer update: the file selection dialog boxes on my machine, as well as quite a few programs, rely on GTK+3 widgets, but my desktop is really MATE (which uses GTK+2), and the generic theme that the GTK+3 stuff gets rendered in was just plain ugly, so I installed the Cinnamon desktop as an upgrade, and have been playing with that for a bit. My thoughts so far:
Cinnamon seems faster, and seems to also use less computing power.
It (Cinnamon) seems less complete, and more buggy, than MATE.
It has some clean looks, but it is really plain, and all my favorite little pieces of eye candy are gone.
I tried going back to the MATE desktop, but I found that even though I enjoyed having my toys back, the ugly GTK+3 programs really were too much, and so I’m trying to get used to my new plain-Jane desktop. (I should look into installing a MATE theme that works for both GTK+2 and 3. Then I could go home again.)
But in the meantime… The next time I fired up the Java OpenStreetMap editor (after installing the new Java), JOSM just puked a bunch of error messages. No idea what went wrong, but I was launching it with the old Java Runtime Environment, and I thought that maybe it was interfering with the new environment I installed. So, I tried launching it with the new JRE, and it came up just fine, along with a message that, at long last, it too had been upgraded to the new Java 8 — purely by coincidence, but at the same time as my own fiddling.
Anyway, all is well again, if slightly dull, in my computer land.
I took the bike up to Sals for its maiden voyage, and I managed to catch a tree with the left handlebar — the handlebars are much wider than the ones on my Turner, or any of my other bikes — on a downhill no less, just after the 3 B’s climb, and it dumped me at speed into the rocks. I landed on my right knee, hard enough to see stars, and to literally bounce across the trail and roll down the hill. It took me 10 minutes to even get up, I was convinced I’d broken something, and I had to walk most of the way home. I spent most of the afternoon and evening with ice on my knee, and I will be doing the same today.
Other than that it was an OK ride, not awesome but OK. The Santa Cruz rides quite differently than the Turner did, and there will be some things I will just have to get used to, and a few things I’ll need to do — suspension adjustments, seat height, possibly change the handlebar length (shorter) and the stem length (longer) — to dial in the ride. I would like for the bike to be a bit more responsive in turns, but that may come with time and those adjustments.
There are three pieces of new technology, new to me anyway, on this new bike: tubeless tires, an adjustable seatpost, and 1×11 gearing. The tires are probably an improvement, but one — the absence of flats — that I might not really notice, and the seatpost is a cool gimmick so far, but it’ll be a while before it’s really incorporated into my riding; the new gearing is a bit more problematic. I went from 17 effective gears on the Turner’s 3×9 to just 11 here, and it seems like I have less of a high end and less of a low end, as well as a less fine-grained set of gear choices. This may be the hardest thing to get used to, but there is apparently no going back: triple chainrings, and even doubles, are being phased out on mountain bikes. The single ring saves weight on what could be a bigger and heavier bike, and I think the Santa Cruz has a lower bottom bracket, so a smaller, single chainring also helps with ground clearance.
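Out of curiosity I ran the numbers on the gearing, with guessed-at tooth counts — these are typical 3×9 and 1×11 setups, not necessarily what’s on either bike:

```python
# Guessed-at tooth counts for typical 3x9 and 1x11 drivetrains.
old_rings = [22, 32, 44]
old_cogs = [11, 13, 15, 17, 20, 23, 26, 30, 34]
new_rings = [30]
new_cogs = [10, 12, 14, 16, 18, 21, 24, 28, 32, 36, 42]

def ratios(rings, cogs):
    """All chainring/cog gear ratios, sorted low to high."""
    return sorted(r / c for r in rings for c in cogs)

old, new = ratios(old_rings, old_cogs), ratios(new_rings, new_cogs)
print(f"3x9:  {len(old)} combos, ratios {old[0]:.2f} to {old[-1]:.2f}")
print(f"1x11: {len(new)} combos, ratios {new[0]:.2f} to {new[-1]:.2f}")
```

With those assumptions the 1×11 gives up a little at both the top and the bottom of the range, which squares with how it feels on the trail.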
Anyway, the bike seemed to perform well, especially on downhills, though the big crash wasn’t my only one yesterday, and though it seemed both twitchy (the short stem) and hard to turn (the long wheelbase) it did well enough at Sals. Unfortunately, it’ll be a while before I get to ride it again, and even worse, I’m going to have to bail on the Wilderness 101 this weekend.
That’s right, no W101. We saw Renee last night and I had to give her the bad news. I felt like such a disappointment, but I won’t be walking much, much less riding, in the next week, and even if I could ride, my knee could never handle 100 miles the way it feels now. Timing is everything.
I’ve had trouble recently with using my email here at donkelly.net: some — not all, but some — networks wouldn’t communicate with mine, emails couldn’t be exchanged, and looking into why that was so at, say, SDF.org, I found that they couldn’t even resolve my domain. The domain always resolved on my home computer though, so some DNS was working somehow.
But my laptop generally uses the DNS server on whatever wifi it’s connected to, and now, connecting here on Rainbow Lake, whatever DNS server they use wouldn’t resolve my domain — which I took to mean uh oh, there really is a problem with my setup and not just at SDF or whatever.
I checked my DNS info using third party websites and found that there were some major discrepancies — there were four nameservers listed rather than two, and two of them didn’t work. Turns out the original ones had been retired (by my service provider) but my system hadn’t been updated, and the broken servers were the retired originals, which were the only ones listed in my site’s configuration — I have no idea where/how the correct nameservers got listed. I went in and removed the bad servers, added the good ones to my configuration, gave it a few hours, and now even the formerly broken emails seem to work.
I could have, and should have, done something about this months ago.