MetaFilter: Two days of Bowie-inspired radio programming

Last weekend, NTS and Sonos presented a full weekend of programming celebrating David Bowie, broadcasting direct from the new Sonos London store on Seven Dials in Covent Garden. Hosts included Dev Hynes, Iggy Pop, Thurston Moore, Connan Mockasin, Neneh Cherry, and many more. The full archive is here; descriptions of individual shows (as provided by the NTS website), with links to each show, follow.

Dylan Jones (show link)
Dylan Jones, perhaps best known as editor of GQ magazine, will be kicking off the two-day broadcast by reading excerpts of interviews from his recent oral history, titled David Bowie: A Life.

Dev Hynes (show link)
The New York-based pop auteur and songwriter Devonté Hynes (also known by his stage name, Blood Orange), is compiling an intricately constructed audio-collage slash soundscape dedicated to the life and work of David Bowie.

Ross Allen (show link)
NTS stalwart, music buff and all around good guy Ross Allen will be examining the R&B and soul influences in Bowie's work during the mid-seventies, particularly around the album 'Young Americans'.

Tim Noakes (show link)
Tim Noakes is the synthesizer fanatic behind the monthly Synth Hero show on NTS Radio. Tim will be exploring Bowie's lasting influence on electronic music live from Sonos London on Seven Dials.

Michael Rother (show link)
A founding member of the legendary krautrock band Neu!, Michael Rother will be using his extensive back catalogue to illustrate the pull Berlin had on David Bowie and his interaction with the krautrock and kosmische scene.

Nabihah Iqbal (show link)
The NTS regular and Ninja Tune-signed artist Nabihah Iqbal (formerly known as Throwing Shade) will be exploring the creation and inspiration of Bowie's eleventh album, Low, originally released in 1977.

Leyla Pillai (show link)
Leyla is the genius behind the longstanding NTS show 'Who's That Girl'. Expect a deep dive into Bowie's psyche as Leyla paints an expressive and illuminating audio-portrait of the iconic star.

Franz Ferdinand (show link)
The Glaswegian indie outfit will be compiling their own hour-long tribute to the Brixton-born songwriter.

Iggy Pop (show link)
As a long-time friend and collaborator of David Bowie, Iggy Pop will take listeners on an intimate journey through his songs and the influences that helped create them.

Bat For Lashes (show link)
Natasha Khan, a.k.a Bat For Lashes, will be looking at David Bowie as a visual artist, actor and musician, by revisiting the soundtracks of the films in which he took part as well as other inspirational cues.

Bullion (show link)
"Pop, not slop!" Nathan Jenkins, a.k.a Bullion, is the man behind London label DEEK Recordings. For his NTS x Sonos show Nathan will be picking apart Bowie's '77 album, Heroes, and exploring his collaborative relationship with Brian Eno.

Charlie Bones (show link)
The NTS breakfast host Charlie 'Eagle' Bones will be playing a two-hour tribute to the great David Bowie, guiding listeners through a selection of his favourite Bowie and Bowie-adjacent records.

Neneh Cherry (show link)
The Swedish songwriter and artist Neneh Cherry will be playing a varied selection of obscure and rare David Bowie covers from all corners of the globe.

Thurston Moore (show link)
Having shared a stage with Bowie, as part of Sonic Youth, at his fiftieth birthday concert, Thurston Moore will share his own experience of the pop icon and champion of the avant-garde.

Connan Mockasin (show link)
Connan Mockasin will be sharing his formative Bowie experiences through his love of the 1996 Australia-only CD compilation, London Boy, which Mockasin listened to repeatedly on first moving to London.

David Holmes (show link)
The acclaimed composer and NTS host David Holmes will be looking at David Bowie the magpie, closely examining the influences he picked up and reinterpreted throughout his career.

Slashdot: Sacramento Regional Transit Systems Hit By Hacker

Zorro shares a report from CBS Local: Sacramento Regional Transit is the one being taken for a ride on this night, by a computer hacker. That hacker forced RT to halt the operating systems that take credit card payments and assign buses and trains to their routes. The local transit agency alerted federal agents following an attack on their computers that riders may not have noticed Monday. "We actually had the hackers get into our system, and systematically start erasing programs and data," said Deputy General Manager Mark Lonergan. Inside RT's headquarters, computer systems were taken down after the hacker deleted 30 million files. The hacker also demanded a ransom in bitcoin, and left a message on the RT website reading "I'm sorry to modify the home page, I'm good hacker, I just want to help you fix these vulnerability."

Read more of this story at Slashdot.

Instructables: exploring - featured: 20 Hour $20 Table Top Arcade Build With Hundreds of Games Built In.

I'd been wanting to make something like this for a while but was in no rush with plenty of other projects always to do. Since I was in no rush I just waited until I accumulated all the necessary components for the build at inexpensive prices. Here's a list of the price of each component & where I fo...
By: Nesmaniac

Continue Reading »

Slashdot: FCC Will Also Order States To Scrap Plans For Their Own Net Neutrality Laws

An anonymous reader quotes a report from Ars Technica: In addition to ditching its own net neutrality rules, the Federal Communications Commission also plans to tell state and local governments that they cannot impose local laws regulating broadband service. This detail was revealed by senior FCC officials in a phone briefing with reporters today, and it is a victory for broadband providers that asked for widespread preemption of state laws. FCC Chairman Ajit Pai's proposed order finds that state and local laws must be preempted if they conflict with the U.S. government's policy of deregulating broadband Internet service, FCC officials said. The FCC will vote on the order at its December 14 meeting. It isn't clear yet exactly how extensive the preemption will be. Preemption would clearly prevent states from imposing net neutrality laws similar to the ones being repealed by the FCC, but it could also prevent state laws related to the privacy of Internet users or other consumer protections. Pai's staff said that states and other localities do not have jurisdiction over broadband because it is an interstate service and that it would subvert federal policy for states and localities to impose their own rules.

Read more of this story at Slashdot.

Bifurcated Rivets: From FB


Bifurcated Rivets: From FB

Good grief

Bifurcated Rivets: From FB


Bifurcated Rivets: From FB


Bifurcated Rivets: From FB

Interesting

Net-FullAuto-1.0000398

Perl Based Secure Distributed Computing Network Process

Recent additions: lenz

Added by MatthewFarkasDyck, Tue Nov 21 23:50:33 UTC 2017.

Van Laarhoven lenses

Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: Up, up, up

Tuesday rush hour, at the foot of Bay Street outside Union Station, downtown Toronto.


Around the time of Y2K, which most Millennials have never heard of, investors went gooey for tech. Nortel exploded (before it imploded). The NASDAQ surged daily. Kids with skateboards and cool URLs went public with profitless companies and made millions. Then it all blew up.

The tech bubble burst. Science and technology mutual funds – then all the rage – lost 80% of their value. Investors who had gone all-in, believing dot-coms and the Internet would become the backbone of modern life, were crushed. Panic selling in late 2000 and early 2001. Then came Nine Eleven to finish things off.

Now we know the techies were right. Seventeen years ago it was inconceivable every single person on the sidewalk would have a smart phone, that Uber would replace taxis, AirBnB supplant hotels, landlines become relics, Russia use Facebook to throw the US election or something online called Amazon be one of the biggest companies in the world. And, hey, let’s not forget about yesterday’s blockchain bubble blog.

So what did people do wrong in the first tech-fueled equity romp when they got all sauced up with hormones and hype? Two things. They engaged in massive speculation, bidding up the value of companies with visionary ideas, epic burn rates and no earnings (did someone just say ‘Tesla’?). Investors were swept along on vision and promise, not profits and dividends. They paid a massive price. They will again.

Second, they lacked balance. Thinking diversification was for people dumber than them and only old ladies bought bonds or blue chips, they passionately put 100% of their investment bucks into one thing (did I just hear ‘Bitcoin’?). When the herd was moving in the same direction, it was euphoric and thrilling. When reality arrived, so did the losses.

All this seems relevant at a time when tech issues have been propelling stock markets to new highs. Happened again Tuesday. The Dow, S&P and Nasdaq all soared. Even the limp old TSX has been catching up, and also sits at a pinnacle. As I mentioned days ago, the same has taken place in Japan, Europe and with emerging markets.

The obvious question: another bubble? A rerun of 2000, or even 2008?

Last time I said it was irrational to expect a 20% market correction just because an index had reached a new summit. Comparisons with Y2K or the credit crisis (or 1929, or 1987) are meaningless without context. This time loads of tech-based companies are making money. Look at Apple ($45 billion profit in three months) or Google ($22 billion profit). Amazon is trashing department stores and just bought a supermarket chain. Practical, real-life, broad-based applications of devices, products and services using online platforms have created wildly successful corporations. Share prices have soared. Investors collect dividends. These are valid businesses.

So the Dow looks vertical mostly because companies are making money. Corporate profits are advancing double-digits. There is more to come.

The advance will also come on the shoulders of a humungous US corporate tax cut (35% down to 20%), full employment in the US, and the beating-down of anti-global populism in Europe. The world economy is growing at 3%. Bankers worried about deflation two years ago are now raising interest rates to quell inflation. Never again in the life of you or your children will there be 2% long-term mortgages.

This is a world in which you should expect growth to continue, and be reflected in financial assets. But as markets move higher, spurred by entrepreneurs and execs who have figured out how to make passion also make money, there’ll be corrections, dips, shocks and surprises. The higher they go, the more violent the moves.

So we seem to have figured out the tech thing (with some notable bubble companies and commodities). Now you need to understand balance. It's a hard lesson to grasp when everything's going up. Not so when things fall.

The point of a 60/40 portfolio is to have two-fifths of your money in stuff that's less volatile, pays a predictable income stream or is negatively correlated to the more excitable equity markets. No, 40% should not go into bonds – only some. The best choice right now (for a portfolio large enough to have several positions) is 10% in short government bonds, 6% in corporates, 3% each in high-yield and provincial debt, plus a little cash and about 15% in preferred shares. The prefs turn out a 4% tax-advantaged dividend plus an increase in capital value as rates rise. The bonds stifle volatility and have a history of rising when stocks are falling. Of course, do all this through ETFs, to spread the risk.
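As a quick arithmetic check on the allocation just described (the weights are taken from the paragraph above; treating the "little cash" as whatever remains to bring the safe sleeve to 40%):

```python
# Weights from the paragraph above, in percent of the whole portfolio.
sleeve = {
    "short government bonds": 10,
    "corporate bonds": 6,
    "high-yield debt": 3,
    "provincial debt": 3,
    "preferred shares": 15,
}

# The "little cash" is whatever brings the safe sleeve to 40%.
cash = 40 - sum(sleeve.values())
print(cash)  # 3
```

So the named positions sum to 37%, leaving about 3% in cash to round out the 40% fixed-income side.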

So far this year a balanced, diversified portfolio is ahead about 9.5%. Tesla stock is up 66%. Bitcoin is ahead 700%.

Sexy isn’t everything. Look at me.

Recent additions: lenz

Added by MatthewFarkasDyck, Tue Nov 21 23:44:52 UTC 2017.

Van Laarhoven lenses

Perl-Critic-1.131_01

Critique Perl source code for best-practices.

Test-Simple-1.302114-TRIAL

Basic utilities for writing tests.

Slashdot: Uber Fined $8.9 Million In Colorado For Allowing Drivers With Felonies, Motor Violations To Work

Uber has been fined by a Colorado regulator on Monday for nearly $9 million, after an investigation revealed that 57 people with criminal and motor vehicle offenses were allowed to drive with the ride-hailing company. Jalopnik reports: States across the U.S. have been considering laws to require additional background checks for individuals who drive for Uber and competitors like Lyft. In Colorado, the state's Public Utilities Commission investigated the company's drivers after an incident this past March, reported The Denver Post, when a driver dragged a passenger out of a car and kicked them in the face. The commission said it found 57 drivers had issues that should've disqualified them from driving for Uber, including felony convictions for driving under the influence and reckless driving, while others had revoked, suspended or canceled licenses. A similar investigation was conducted on Lyft, the Post reported, but no violations were revealed. An Uber spokesperson said the situation stems from a "process error" that was "inconsistent with Colorado's ridesharing regulations." The spokesperson said Uber "proactively notified" the commission. "This error affected a small number of drivers and we immediately took corrective action," the company said in a statement to the Post. "Per Uber safety policies and Colorado state regulations, drivers with access to the Uber app must undergo a nationally accredited third-party background screening. We will continue to work closely with the CPUC to enable access to safe, reliable transportation options for all Coloradans."

Read more of this story at Slashdot.

MetaFilter: 'Tashes through time

Prehi(p)storic: an early history of the ostentatious moustache, a storymap from the Early Celtic Art in Context project.

Slashdot: HP Enterprise CEO Meg Whitman To Step Down

Hewlett Packard Enterprise's Meg Whitman is stepping down as chief executive officer. Reuters reports: Whitman engineered the biggest breakup in corporate history during her six-year tenure at the helm, creating HPE and the PC-and-printer business HP Inc from parent Hewlett Packard Co in 2015. Whitman will be succeeded by the company's president, Antonio Neri, who takes over from Feb. 1. "Now is the right time for Antonio and a new generation of leaders to take the reins of HPE," Whitman said in a statement. Whitman, who will continue as a board member, had been steering the company towards areas such as networking, storage and technology services.

Read more of this story at Slashdot.

Mutex-1.005

Various locking implementations supporting processes and threads

MCE-Shared-1.833

MCE extension for sharing data supporting threads and processes

Instructables: exploring - featured: Kombucha Wallet

The Kombucha wallet is made with the cellulose produced by a colony of bacteria and yeasts that grows on the surface of the probiotic known as Kombucha. This cellulose is extremely resistant and is a great sustainable alternative to the use of animal leather, paper and cardboard. Growing the...
By: Zampa

Continue Reading »

Instructables: exploring - featured: A Different Way to Blow Up a Balloon

Baking soda + vinegar. This well-known reaction, so enjoyed by kids and kids at heart, can be used to teach several basic science concepts. Not just a cool science trick, this Instructable will walk you through how to set up a science lesson to explore several concepts. The enduring understan...
By: KMonsma

Continue Reading »

Slashdot: Uber Concealed Cyberattack That Exposed 57 Million People's Data

According to Bloomberg, hackers stole the personal data of 57 million customers and drivers from Uber. The massive breach was reportedly concealed by the company for more than a year. From the report: Compromised data from the October 2016 attack included names, email addresses and phone numbers of 50 million Uber riders around the world, the company told Bloomberg on Tuesday. The personal information of about 7 million drivers was accessed as well, including some 600,000 U.S. driver's license numbers. No Social Security numbers, credit card details, trip location info or other data were taken, Uber said. At the time of the incident, Uber was negotiating with U.S. regulators investigating separate claims of privacy violations. Uber now says it had a legal obligation to report the hack to regulators and to drivers whose license numbers were taken. Instead, the company paid hackers $100,000 to delete the data and keep the breach quiet. Uber said it believes the information was never used but declined to disclose the identities of the attackers. Here's how the hack went down: Two attackers accessed a private GitHub coding site used by Uber software engineers and then used login credentials they obtained there to access data stored on an Amazon Web Services account that handled computing tasks for the company. From there, the hackers discovered an archive of rider and driver information. Later, they emailed Uber asking for money, according to the company.

Read more of this story at Slashdot.

Planet Haskell: The GHC Team: GHC 8.2.2 is available

The GHC Team is pleased to announce a new minor release of GHC. This release builds on the performance and stability improvements of 8.2.1, fixing a variety of correctness bugs, improving error messages, and making the compiler more portable.

Notable bug-fixes include

  • A correctness issue resulting in segmentation faults for some FFI users (#13707, #14346)
  • A correctness issue resulting in undefined behavior in some programs using STM (#14171)
  • A bug which may have manifested in segmentation faults under out-of-memory conditions (#14329)
  • clearBit of Natural no longer bottoms (#13203)
  • A specialisation bug resulting in exponential blowup of compilation time in some specialisation-intensive programs (#14379)
  • ghc-pkg now works even in environments with misconfigured NFS mounts (#13945)
  • GHC again supports production of position-independent executables (#13702)

A thorough list of the changes in the release can be found in the release notes.

How to get it

This release can be downloaded from

For older versions see

We supply binary builds in the native package format for many platforms, and the source distribution is available from the same place.


Haskell is a standard lazy functional programming language.

GHC is a state-of-the-art programming suite for Haskell. Included is an optimising compiler generating efficient code for a variety of platforms, together with an interactive system for convenient, quick development. The distribution includes space and time profiling facilities, a large collection of libraries, and support for various language extensions, including concurrency, exceptions, and foreign language interfaces. GHC is distributed under a BSD-style open source license.

A wide variety of Haskell related resources (tutorials, libraries, specifications, documentation, compilers, interpreters, references, contact information, links to research groups) are available from the Haskell home page (see below).

On-line GHC-related resources

Relevant URLs on the World-Wide Web:

Supported Platforms

The list of platforms we support, and the people responsible for them, is here

Ports to other platforms are possible with varying degrees of difficulty. The Building Guide describes how to go about porting to a new platform.


We welcome new contributors. Instructions on accessing our source code repository, and getting started with hacking on GHC, are available from the GHC developers' site run by Trac.

Community Resources

There are mailing lists for GHC users, developers, and monitoring bug tracker activity; to subscribe, use the Mailman web interface.

There are several other Haskell and GHC-related mailing lists on; for the full list, see the lists page.

Some GHC developers hang out on the #ghc and #haskell channels on the Freenode IRC network, too. See the Haskell wiki for details.

Please report bugs using our bug tracking system. Instructions on reporting bugs can be found here.

Recent additions: compose-ltr 0.2.4

Added by Wizek, Tue Nov 21 22:01:25 UTC 2017.

More intuitive, left-to-right function composition.

ScreenAnarchy: Psycho Pompous: Expressionist Horror, Part II: The Man Who Laughs, An Expansion

The Man Who Laughs is, for the most part, not a horror film. It is a melodrama and a tragic love story in which many of the melancholy elements are twisted into a haunting gothic representation of the emotional states of the main characters. Now... Why are we spending yet another installment of this column talking about this film? Besides finally getting to the movie itself, it’s because The Man Who Laughs injected the right elements into the horror scene at precisely the right time. This is when (in American horror filmmaking) the priority would shift from light to shadow, from daydreams to nightmares. German expressionism had arrived to upset the storytelling pastimes and ideals of American romanticism, from which horror films would never be...

[Read the whole post on]

MetaFilter: Once he started, it was all about the stops...

Christopher Herwig is back with more wild architectural wonders: When Christopher Herwig, a Canadian photographer, first embarked on his arduous long-distance cycle from London to St Petersburg back in 2002, the outlandishly designed bus stop was nothing more than a pleasing oddity. What Herwig didn't expect was that this was only the start of his life-long obsession; there were similarly peculiar roadside shelters scattered across the post-Soviet world. His Soviet Bus Stops Volume II is a new collection of bus stop photos from remote areas of Georgia, Ukraine, and Russia. Herwig previously on Metafilter: A fascinating journey of architectural obsession (also previously and previouslier).

MetaFilter: "I've been keeping a straight face for thirty-five years."

The Church of the SubGenius Finally Plays It Straight, Eddie Smith

Instructables: exploring - featured: PVC and the Art of SCUBA Maintenance

Like most surfers and all scuba divers I needed a way to dry and store my wetsuits and some of the other gear. Drying and storing a wetsuit requires a bit of care, and a soft touch. If the hanger puts too much pressure at any one point it could crease the neoprene. The suit must also be held open...
By: laserline

Continue Reading »

Colossal: Surreal Animated Photos and Artworks by Nicolas Monterrat

Illustrator and animator Nicolas Monterrat (previously) has brought his wild imagination to historical photographs and artworks that he sets in motion and shares on Ello. The short animations blend images borrowed from old catalogues, newspapers, and textbooks with snippets of abstract footage to create collage-like images that range from humorous to downright terrifying. You can follow more from the Paris-based artist on Tumblr. (via Cross Connect)

Instructables: exploring - featured: Jump Starting Car With Drill's Battery

3 years ago, I published an Instructables where I demonstrated a way to jump start a car using a battery from a drill. Some people were not sure if it would work on bigger cars, as the car I used at the time was a Kia Picanto. In this improved Instructable, we'll jump start a 2009 Opel Zafira, 2L Diesel and...
By: ShakeTheFuture

Continue Reading »

OCaml Planet: Eighteenth OCaml compiler hacking evening at Pembroke, Cambridge

Our next OCaml Compiler Hacking event will be on Thursday 7th December in The Thomas Gray Room at Pembroke College, Cambridge.

If you're planning to come along, it'd be helpful if you could indicate interest via Doodle and sign up to the mailing list to receive updates.

When: Thursday 7 December 2017, 19:00 - 22:30

Where: The Thomas Gray Room, Pembroke College, Cambridge CB2 1RF

Who: anyone interested in improving OCaml. Knowledge of OCaml programming will obviously be helpful, but prior experience of working on OCaml internals isn't necessary.

Refreshments: Finger buffet in hack room.

What: fixing bugs, implementing new features, learning about OCaml internals


This hack evening focuses on fixing up opam packages as well as work on the OCaml compiler.

The OCaml 4.06 release made safe-string the default, rather than optional as in previous releases. The focus of this event will be to work on the opam repository to fix up as many packages as possible, and also to publish a guide detailing how to migrate your packages for wider use.

The evening will also feature a short (5-10 min) presentation about the recent MirageOS Marrakech Hack Retreat.

MetaFilter: The Ordovician What?

I do love the Cambrian Explosion, but this is just as spectacular. I checked the link to the original publication, but it only leads to an abstract, so this article is better.

Recent additions: http2-client

Added by LucasDiCioccio, Tue Nov 21 20:50:58 UTC 2017.

A native HTTP2 client library.

Daniel Lemire's blog: Do relational databases evolve toward rigidity?

The Hanson law of computing states that:

Any software system, including advanced intelligence, is bound to decline over time. It becomes less flexible and more fragile.

I have argued at length that Hanson is wrong. My main argument is empirical: we build much of our civilization on old software, including a lot of open-source software.

We often build new software to do new things, but that’s on top of older software. Maybe you are looking at your smartphone and you think that you are using software built in the last 4 years. If you think so, you are quite wrong.

So it is not the case that old software becomes obviously less useful or somehow less flexible with time. Yet, to adapt to new conditions, old software often needs “rejuvenation”, which we typically call “refactoring”. Old database systems like MySQL were designed before JSON and XML even existed. They have since been updated so that they can deal with these data types efficiently.

So old widely used software tends to get updated, refactored, reengineered…

Viewed at a global scale, software evolves by natural selection. Old software that cannot adapt tends to die off.

There has been a fair amount of work on software aging. However, much of the work is of an applied nature: researchers want to provide guidance to engineers as to when they should engage in refactoring work (to rejuvenate their software). They are less interested in the less practical problem of determining how software evolves and dies.

Software often relies on database tables. These tables are defined by the attributes that make them up. In theory, we can change these attributes, add new ones, remove old ones. Because open-source software gives us access to these tables, we can see how they evolve. Vassiliadis and Zarras recently published an interesting empirical paper on this question.

Their core result is that tables with lots of attributes (wide tables) tend to survive a long time unchanged. Thinner tables (with fewer attributes) die young. Why is that? One reason might be that wide tables covering lots of attributes tend to have lots of code depending on them. Thus changing these tables is expensive: it might require a large refactoring effort. So these wide tables tend to stick around, and they contribute to “software rigidity”. That is, old software will accumulate these wide tables that are too expensive to change.

I believe that this “evolution toward rigidity” is real. But it is less of a general feature of software, and more of a particular defect of the relational database model.

This defect, in my view, is as follows. The relational model makes the important recognition that some attributes depend on other attributes (sometimes called “functional dependencies”). So if you have the employee identifier, you can get his name and his rank. From his rank, you might get his salary. From this useful starting point, we get two problems:

  1. Instead of simply treating these dependencies between attributes as first-class citizens, the relational model does away with them, by instead representing them as “tables” where, somehow, attributes need to be regrouped. So, incredibly, the SQL language has no notion of functional dependency. Instead, it has keys and tables. These are not the same ideas!

    Why did functional dependencies get mapped to keys and tables? Simply because this is a natural and convenient way to implement functional dependencies. So we somehow get that “employee identifier, name, rank” get aggregated together. This arbitrary glue leads to rigidity as more and more attributes get lumped together. You cannot reengineer just one dependency or one attribute, without possibly affecting a lot of code.

  2. Functional dependencies are nice, but far more inflexible and limited than they seem at first. For example, some people have more than one name. People change names, actually quite often. Some information might be unknown, uncertain. To cope with uncertain or unknown data, the inventor of the relational model added “null” markers to his model, and some kind of three-valued logic that is not even consistent. In a recent paper with Badia, I showed that it is not even possible, in principle, to extend functional dependencies to an open-world model (e.g., as represented by disjunctive tables).
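The mapping from functional dependencies to keys and tables can be made concrete with a small sketch (the schema and names here are mine, purely for illustration, using Python's bundled sqlite3). The dependencies employee identifier → (name, rank) and rank → salary each become a table keyed on the dependency's left-hand side, and SQL itself only ever sees the keys and tables, never the dependencies:

```python
import sqlite3

# Illustrative schema: each functional dependency becomes a table whose
# primary key is the dependency's left-hand side.
con = sqlite3.connect(":memory:")
con.executescript("""
    -- employee_id -> (name, rank): attributes glued into one table
    CREATE TABLE employee (
        employee_id INTEGER PRIMARY KEY,
        name TEXT,
        rank TEXT
    );
    -- rank -> salary: a separate table keyed on rank
    CREATE TABLE rank_salary (
        rank TEXT PRIMARY KEY,
        salary INTEGER
    );
""")
con.execute("INSERT INTO employee VALUES (1, 'Ada', 'senior')")
con.execute("INSERT INTO rank_salary VALUES ('senior', 100000)")

# SQL has no way to name the dependency itself; we can only chase the
# keys through joins.
row = con.execute("""
    SELECT e.name, r.salary
    FROM employee e JOIN rank_salary r ON e.rank = r.rank
    WHERE e.employee_id = 1
""").fetchone()
print(row)  # ('Ada', 100000)
```

Note how the grouping is arbitrary glue: moving, say, rank out of the employee table later would require rewriting every query that joins through it, which is exactly the rigidity described above.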

So I would say that relational databases tend to favor rigidity over time.

There are some counterpoints that may contribute to explain why the sky is not falling despite this very real problem:

  • Programmers have a pragmatic approach. In practice, people have never really taken the relational model at its word: SQL is not a faithful implementation of the relational model. Ultimately, you can use a relational database as a simple key-value store. Nobody is forcing you to adopt wide tables. So there is more flexibility than it appears.
  • There are many other instances of rigidity. Constitutions change incrementally because it is too hard to negotiate large changes. Biology is based on DNA and it is unlikely to change anytime soon. Mathematics is based on standard axioms, and we are not likely to revisit them anytime soon. So it is not surprising that we end up locked into patterns. And it is not necessarily dramatic. (But we should not underestimate the cost: mammals have lungs that are far less efficient than the lungs of birds. Yet there is no obvious way for mammals to evolve a different lung architecture.)
  • We have limited the rigidity when we stopped relying universally on SQL as the standard interface to access data. In the web era, we create services that we typically access via HTTP requests. So the rigidity does not have to propagate to the whole of a large organization.
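The first counterpoint above can be sketched concretely: nothing forces wide tables on you, and a relational engine works fine as a plain key-value store. A minimal, hypothetical example (table and key names are mine) with Python's bundled sqlite3:

```python
import sqlite3

# A relational database used as a simple key-value store: one narrow
# two-column table, no wide schema to refactor later.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

def put(k, v):
    # INSERT OR REPLACE keeps one row per key (an upsert).
    con.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))

def get(k):
    row = con.execute("SELECT value FROM kv WHERE key = ?", (k,)).fetchone()
    return row[0] if row else None

put("user:1:name", "Ada")
put("user:1:name", "Ada Lovelace")  # overwrite; no schema migration needed
print(get("user:1:name"))  # Ada Lovelace
```

New attributes here are just new keys, so the schema never accumulates the expensive-to-change width that the wide-table result describes.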

Credit: Thanks to Antonio Badia and Peter Turney for providing me with references and insights for this post.

Quiet Earth: New on Blu-ray and DVD! November 21, 2017

Here's what's new on Blu this week! From acclaimed filmmakers Josh and Benny Safdie comes Good Time, a hypnotic, adrenaline-fueled crime thriller.

In a career-defining role, Robert Pattinson stars as Connie Nikas, who embarks on a twisted one-night odyssey through the city's underworld in a desperate and dangerous attempt to get his brother Nick (Benny Safdie) out of jail.


[Continued ...]

Recent additions: hs-functors

Added by MatthewFarkasDyck, Tue Nov 21 20:07:46 UTC 2017.

Functors from products of Haskell and its dual to Haskell

Open Culture: See the First Photograph of a Human Being: A Photo Taken by Louis Daguerre (1838)

You’ve likely heard the reason people never smile in very old photographs. Early photography could be an excruciatingly slow process. With exposure times of up to 15 minutes, portrait subjects found it impossible to hold a grin, which could easily slip into a pained grimace and ruin the picture. A few minutes represented marked improvement on the time it took to make the very first photograph, Nicéphore Niépce’s 1826 “heliograph.” Capturing the shapes of light and shadow outside his window, Niépce’s image “required an eight-hour exposure,” notes the Christian Science Monitor, “long enough that the sunlight reflects off both sides of the buildings.”

Niépce’s business and inventing partner is far better known: Louis-Jacques-Mandé Daguerre, who went on after Niépce’s death in 1833 to develop the Daguerreotype process, patenting it in 1839. That same year, the first selfie was born. And the year prior, Daguerre himself had taken what most believe to be the very first photograph of a human, in a street scene of the Boulevard du Temple in Paris. The image shows us one of Daguerre’s early successful attempts at image-making, in which, writes NPR’s Robert Krulwich, “he exposed a chemically treated metal plate for ten minutes. Others were walking or riding in carriages down that busy street that day, but because they moved, they didn’t show up.”

Visible, however, in the lower left quadrant is a man standing with his hands behind his back, one leg perched on a platform. A closer look reveals the fuzzy outline of the person shining his boots. A much finer-grained analysis of the photograph shows what may be other, less distinct figures, including what looks like two women with a cart or pram, a child’s face in a window, and various other passersby. The photograph marks a historically important period in the development of the medium, one in which photography passed from curiosity to revolutionary technology for both artists and scientists.

Although Daguerre had been working on a reliable method since the 1820s, it wasn’t until 1838, the Metropolitan Museum of Art explains, that his “continued experiments progressed to the point where he felt comfortable showing examples of the new medium to selected artists and scientists in the hope of lining up investors.” Photography’s most popular 19th century use—perhaps then as now—was as a means of capturing faces. But Daguerre’s earliest plates “were still life compositions of plaster casts after antique sculpture,” lending “the ‘aura’ of art to pictures made by mechanical means.” He also took photographs of shells and fossils, demonstrating the medium’s utility for scientific purposes.

If portraits were perhaps less interesting to Daguerre’s investors, they were essential to his successors and admirers. Candid shots of people moving about their daily lives as in this Paris street scene, however, proved next to impossible for several more decades. What was formerly believed to be the oldest such photograph, an 1848 image from Cincinnati, shows what appears to be two men standing at the edge of the Ohio River. It seems as though they’ve come to fetch water, but they must have been standing very still to have appeared so clearly. Photography seemed to stop time, freezing a static moment forever in physical form. Blurred images of people moving through the frame expose the illusion. Even in the stillest, stiffest of images, there is movement, an insight Eadweard Muybridge would make central to his experiments in motion photography just a few decades after Daguerre debuted his world-famous method.

Related Content:

The First Photograph Ever Taken (1826)

See The First “Selfie” In History Taken by Robert Cornelius, a Philadelphia Chemist, in 1839

Eadweard Muybridge’s Motion Photography Experiments from the 1870s Presented in 93 Animated Gifs

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness


See the First Photograph of a Human Being: A Photo Taken by Louis Daguerre (1838) is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Giveaway: OLOW x Jean Jullien – “Club Dimanche” Capsule Collection

ScreenAnarchy: Giveaway: Win THE LIMEHOUSE GOLEM on DVD

RLJ Entertainment released Juan Carlos Medina's period thriller The Limehouse Golem on DVD at the beginning of the month. We've been a little lax giving stuff away this month, 'tis the season, but it is better late than never to give away something to our faithful readership. The city of London is gripped with fear as a serial killer – dubbed The Limehouse Golem – is on the loose and leaving cryptic messages written in his victims’ blood. With few leads and increasing public pressure, Scotland Yard assigns the case to Inspector Kildare (Bill Nighy) – a seasoned detective with a troubled past and a sneaking suspicion he’s being set up to fail. Faced with a long list of suspects, including music hall star...

[Read the whole post on]

Open Culture: Watch “Alike,” a Poignant Short Animated Film About the Enduring Conflict Between Creativity and Conformity

From Barcelona comes "Alike," a short animated film by Daniel Martínez Lara and Rafa Cano Méndez. Made with Blender, an open-source 3D rendering program, "Alike" has won a heap of awards and clocked an impressive 10 million views on YouTube and Vimeo. A labor of love made over four years, the film revolves around this question: "In a busy life, Copi is a father who tries to teach the right way to his son, Paste. But ... What is the correct path?" To find the answer, they have to let a drama play out. Which will prevail? Creativity? Or conformity? It's an internal conflict we're all familiar with.

Watch the film when you're not in a rush, when you have seven unburdened minutes to take it in. "Alike" will be added to our list of Free Animations, a subset of our collection, 1,150 Free Movies Online: Great Classics, Indies, Noir, Westerns, etc.

Follow Open Culture on Facebook and Twitter and share intelligent media with your friends. Or better yet, sign up for our daily email and get a daily dose of Open Culture in your inbox. 

If you'd like to support Open Culture and our mission, please consider making a donation to our site. It's hard to rely 100% on ads, and your contributions will help us provide the best free cultural and educational materials.

via Design Taxi

Related Content:

The Employment: A Prize-Winning Animation About Why We’re So Disenchanted with Work Today

Bertrand Russell & Buckminster Fuller on Why We Should Work Less, and Live & Learn More

Charles Bukowski Rails Against 9-to-5 Jobs in a Brutally Honest Letter (1986)

William Faulkner Resigns From His Post Office Job With a Spectacular Letter (1924)


ScreenAnarchy: LIFECHANGER: New Shapeshifter From Justin McConnell Begins Production

Our friend Justin McConnell is back in the director's chair. Taking a hiatus from doing film things for other film people, he is back in creative control on his new shape-shifting horror/thriller Lifechanger. Production began last week, so we are a little late to the party announcing the new project. We hope to be visiting the set near the end of the shoot next month, and will report back to you in kind. Pictured above is Justin's best side as he gazes upon what looks like Elitsa Bako in the monitor. Justin McConnell’s LIFECHANGER begins production in Toronto; production continues until early December. The shape-shifting horror/thriller LIFECHANGER, written and directed by Justin McConnell, has begun production in Toronto. Justin McConnell’s (Broken...

[Read the whole post on]

Colossal: New Porcelain Vessels Densely Layered in Leaf Sprigs and Other Botanical Forms by Hitomi Hosono

Ceramicist Hitomi Hosono (previously) creates porcelain vessels layered in hundreds of leaf sprigs and other botanical forms. These monochromatic elements are based on plants Hosono encounters during walks through East London’s greenery. “It is my intention to transfer the leaf’s beauty and detail into my ceramic work,” she explains, “using it as my own language to weave new stories for objects.”

Her technique is inspired by Jasperware, a type of stoneware covered in thin ceramic reliefs invented by Josiah Wedgwood in the late 18th century. Like Wedgwood, she carefully applies her delicate forms to a porcelain base. From start to finish a larger work will take Hosono nearly a year and a half to complete. Much of this time is spent drying, as her densely layered works often need 10-12 months to completely dry.

Hosono’s solo exhibition, Reimagining Nature: Hitomi Hosono’s Memories in Porcelain, is currently on view at the Daiwa Anglo-Japanese Foundation in London through December 15, 2017. You can see more of her layered botanical sculptures on the artist’s website and through her gallery Adrian Sassoon.

ScreenAnarchy: Cinema One Originals 2017 Review: CHANGING PARTNERS Satisfyingly Delivers Catharsis

In the first minutes of Changing Partners, Agot Isidro’s Alex (don’t be confused, there’ll be two Alex’s here — that’s kind of the concept of the whole film) expresses her excitement over watching the new season of her favorite prime-time musical soap opera.   Her much younger boyfriend Cris (Sandino Martin, one of the Cris’s, that is) finds the notion of characters breaking into song quite absurd. Isidro’s Alex explains that when characters burst into song, it is so emotions can be more untethered and thus more felt — the tempered voice inside one's self breaking free. Then a few minutes later, both this Alex and Cris start belting high notes on the magic and peculiarity of their love.   With this first scene, there’s a...

[Read the whole post on]

Perlsphere: Perl 5 Porters Mailing List Summary: November 13th-20th

Hey everyone,

Following is the p5p (Perl 5 Porters) mailing list summary for the past week.


November 13th-20th

News and Updates

Perl 5.27.6 has been released!

Karl Williamson provided an update on his branch for word-at-a-time searches for UTF-8 invariants. His branch provides up to an 800% improvement in speed on 64-bit builds.

Grant Reports

  • Dave Mitchell TPF Grant 2 September report.
  • Dave Mitchell TPF Grant 2 weekly report #184.
  • Dave Mitchell TPF Grant 2 weekly report #185.
  • Zefram 2017 Week 44 report.
  • Zefram 2017 Week 45 report.
  • Zefram 2017 Week 46 report.


New Issues

Resolved Issues

Rejected Issues

  • Perl #3270: No check whether operators are overloaded to lvalue functions.
  • Perl #92704: Inconsistent proto warnings.
  • Perl #115858: Perl_debug_log and Perl_error_log macro handles must be cached to avoid multiple evaluation.
  • Perl #115860: multiple evaluation problems in Perl_nextargv.
  • Perl #121553: perlbug should offer to execute a mailto link.
  • Perl #122122: [PATCH] PERL_UNUSED_CONTEXT audit.
  • Perl #132443: Cygwin::win_to_posix_path() fails, possible memory corruption.
  • Perl #132448: Carp quoting issue.

Suggested Patches

Steve Hay provided a patch in Perl #123113 to add optional GCC-only support for using long doubles on Win32.

Steve also provided a patch in Perl #125827 to PathTools to not require() modules in subs likely to be in loops.

Hauke D. provided a patch in Perl #132475 to handle LAYER argument in Tie::StdHandle::BINMODE.


Paul Evans and Zefram discuss an API for parsing signatures in the thread “Signature parsing compiler functions”.

Zefram suggested utilizing the smart-match syntax to support type checks in signatures.

Zefram also suggested moving signatures syntax to square brackets ([]).

ScreenAnarchy: Gareth Evans Developing London Gangland Series For HBO Offshoots

According to a report from Deadline The Raid's Gareth Evans is developing a gangland drama series for HBO's Cinemax and Sky Atlantic. The series will be called Gangs of London and is being developed by Evans and his long time cinematographer Matt Flannery.   The drama, which will launch in 2019, is set in contemporary London as it is becoming torn apart by power struggles involving a number of international gangs. The series begins as the head of one criminal gang is assassinated and the power vacuum threatens the fragile peace between the other underworld organisations.   Evans said he hoped the show would bring a “cinematic viewing experience” into U.S and UK homes.   “It has been a thrilling experience to leap into longform...

[Read the whole post on]

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Wait a sec

Click here to go see the bonus panel!

Signalling virtue by being virtuous? Isn't that cheating?

New comic!
Today's News:

Old man Weinersmith shakes his cane at thee.

Colossal: Quirky Cartoon Toys and Vases Carved from Wood by Yen Jui-Lin

Taiwanese artist Yen Jui-Lin carves delightful cartoon-like figures from wood that are almost guaranteed to bring a smile to your face. Some of the pieces function as flower vases or key hooks, while many of the objects are one-off toys that he gives to his children as gifts. You can see many more on Jui-Lin’s Facebook page. (via Lustik)

Planet Haskell: Yesod Web Framework: mega-sdist: the mega repo helper

Many years ago, I wrote a utility called mega-sdist to help me with managing mega repos (more on that below). I've been using it myself ever since, making some minor improvements over the years. But I realized recently that I never really announced it to others, and especially not to the people whom it would help the most: other Yesod contributors and maintainers. Consider this the (massively belated) announcement.

You can find the most up-to-date information in the project's README on GitHub. Below is the current content of that file, to save you a click.

This is a utility written to address the specific needs in maintaining Haskell "mega-repos," or Git repositories containing multiple Cabal projects. It is intended to ease the process of deciding which packages need to be released and tagging those releases appropriately.

It provides the following functionality:

  • Detect when local code has changed from what's on Hackage
    • Note that, due to Hackage revisions, sometimes this logic isn't perfect
  • Detect when a version number needs to be updated
  • Dump the difference between the Hackage version of your package and the local version

To install it... well, listen. This tool is intended for people authoring Haskell packages. Odds are, you already know how to do this. And if you don't know, this probably isn't a tool that will help you. Anyway, in order to install it, first install Stack and then run stack install mega-sdist, or just stack install inside this repository.

Opinionated tool

This utility is highly opinionated in some ways, e.g.:

  • It only supports one style of Git tag name: packagename/version. This may look weird in non-mega-repos, where v1.2.3 looks better than foo/1.2.3, but for mega-repos the former doesn't make sense.
  • It depends on Stack for both discovering all of your local packages, and for uploading to Hackage.

If you're OK with these opinions, keep reading for usage.

Have I changed anything?

Let's say I'm working on the monad-unlift megarepo (chosen as an example of a relatively small repo). I've merged some PRs recently, or at least think I have. But I don't remember which of the individual packages within the repo this affected. Instead of looking at the commit history like some caveman, I'll typically do:

$ git pull # make sure I have all latest changes
$ mega-sdist

The mega-sdist command will:

  • Build tarballs for all local packages
  • Check what the latest versions of my packages on Hackage are
  • Do a full diff on these two things and see if anything's changed

At the time of writing, here's the output from this repo:

The following packages from Hackage have not changed:
monad-unlift-0.2.0

The following packages require a version bump:
monad-unlift-ref-0.2.1

What this means is:

  • The monad-unlift package I have locally is at version 0.2.0. And it perfectly matches that version on Hackage. No actions necessary.
  • The monad-unlift-ref package I have locally is at version 0.2.1. And it doesn't match the code on Hackage. Therefore, if I wanted to run stack upload monad-unlift-ref successfully, I'd need to bump the version number.
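The per-package decision described above can be sketched as follows (an illustration of the logic, not mega-sdist's actual Haskell implementation; the function and its inputs are hypothetical):

```python
# Compare the local sdist tarball against the latest Hackage release
# and classify the package, mirroring the cases mega-sdist reports.

def classify(local_version, local_digest, hackage):
    """`hackage` is None if the package was never released; otherwise a
    (version, digest) pair for the latest Hackage release."""
    if hackage is None:
        return "new package"                # never uploaded before
    hackage_version, hackage_digest = hackage
    if local_digest == hackage_digest:
        return "unchanged"                  # matches Hackage, nothing to do
    if local_version == hackage_version:
        return "requires a version bump"    # changed, but version not bumped
    return "ready to upload"                # changed, version already bumped

# The two cases from the monad-unlift repo above:
print(classify("0.2.0", "aaa", ("0.2.0", "aaa")))  # unchanged
print(classify("0.2.1", "bbb", ("0.2.1", "aaa")))  # requires a version bump
```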

What did I change?

Well, again, if I wanted to see what changed, I could run (again, like a caveman):

$ git diff monad-unlift-ref/0.2.1 -- monad-unlift-ref

But that's long! mega-sdist's got your back. Just run:

$ mega-sdist monad-unlift-ref --get-diffs

This will print out the difference between the tarball uploaded to Hackage and what you have locally. Besides my tongue-in-cheek comment above, this is also useful if, for some reason, you either don't have or don't trust the tags in your Git repo.

One other thing: this diff is currently based on the pristine tarball from Hackage, ignoring cabal file revisions. So the difference may be slightly different from what you'd get from stack unpack monad-unlift-ref-0.2.1. But ¯\_(ツ)_/¯ that's revisions for you.

The default behavior of mega-sdist is to look at all packages specified in your stack.yaml. Targets can be any directory. And mega-sdist will automatically look at packages in any subdirectory, so that mega-sdist . is the same as mega-sdist at the root of your repo*.

* Assuming all of your packages are actually in your repo, but only crazy people would do otherwise.

Preparing a new release

OK, now I continue working on my project, and I've:

  • Made some changes to monad-unlift
  • Updated the cabal file's version number
    • And of course I also updated the ChangeLog, I'm not some monster

From the root of my repo, I run:

$ mega-sdist monad-unlift

Or, equivalently, from inside the monad-unlift subdirectory I run:

$ mega-sdist .

Either way, I get:

The following new packages exist locally:
monad-unlift-0.2.1

No version bumps required, good to go!

This tells me that my package has local changes, and the version number has been updated, so that stack upload monad-unlift will work. Neato! Now, you could just run stack upload ..., but here's what I usually do. First, I'll review the changes I'm about to upload and make sure there are no surprises:

$ mega-sdist --get-diffs .

The following new packages exist locally:
diff -r old/monad-unlift-0.2.0/ new/monad-unlift-0.2.1/
> ## 0.2.1
> * Silly changes
diff -r old/monad-unlift-0.2.0/Control/Monad/Trans/Unlift.hs new/monad-unlift-0.2.1/Control/Monad/Trans/Unlift.hs
> -- I just need some space
diff -r old/monad-unlift-0.2.0/monad-unlift.cabal new/monad-unlift-0.2.1/monad-unlift.cabal
< version:             0.2.0
> version:             0.2.1

No version bumps required, good to go!

OK, that's what I wanted. Time to release. Next, I'm going to use mega-sdist to tag the release:

$ mega-sdist --gittag .

From the root of my repo, this would notice that monad-unlift-ref still requires a version bump, and refuse to proceed. But inside the monad-unlift directory, it notices that all necessary version bumps are done, and happily tags:

$ mega-sdist --gittag .
The following new packages exist locally:

No version bumps required, good to go!
Raw command: git tag monad-unlift/0.2.1

And suddenly I notice something new:

$ ls tarballs/

Neat, mega-sdist left behind tarballs I can upload! To do so, I run:

$ stack upload tarballs/*

Note that this will work whether I'm trying to upload just one package, or all of the updated packages in my repo. Finally, I need to push the new tags to Github (or wherever):

$ git push --tags

And in fact, this upload sequence is so common that I have a shell alias set up:

$ alias upload
alias upload='mega-sdist --gittag . && stack upload tarballs/* && git push --tags'

So there you have it: convenient little utility to help manage repos with lots of packages in them.

Open Culture: 60-Second Introductions to 12 Groundbreaking Artists: Matisse, Dalí, Duchamp, Hopper, Pollock, Rothko & More

Some art historians dedicate their entire careers, and indeed lives, to the work of a single artist. But what about those of us who only have a minute to spare? Addressing the demand for the briefest possible primers on the creators of important art, paintings and otherwise, of the past century or so, the Royal Academy of Arts' Painters in 60 Seconds series has published twelve episodes so far. Of those informationally dense videos, you see here the introductions to Salvador Dalí, Marcel Duchamp, Edward Hopper, Jackson Pollock, and Mark Rothko.

Though short, these crash courses do find their way beyond the very basics. “There's more to Dalí,” says the Royal Academy of Arts' Artistic Director Tim Marlow, than “skillfully rendered fever dreams of sex and decay. He painted one of the twentieth century's great crucifixions, but it's more about physics than religion, and he was as influenced by philosophy as he was by Sigmund Freud.” Duchamp's unorthodox and influential ideas “came together in one of the most ambitious works of the 20th century, The Large Glass, an endlessly analyzed work of machine-age erotic symbolism, science, alchemy, and then some.”

In the seemingly more staid Depression-era work of Edward Hopper, Marlow points to "a profound contemplation of the world around us. Hopper slows down time and captures a moment of stillness in a frantic world," painted in a time of "deep national self-examination about the very idea of Americanness." Hopper painted the famous Nighthawks in 1942; the next year, and surely on the very other end of some kind of artistic spectrum, Hopper's countryman and near-contemporary Jackson Pollock painted Mural, which shows "the young Pollock working through Picasso, continuing to fracture the architecture of cubism" while "at the same time taking on the lessons of the Mexican muralists like Siqueiros and Orozco."

Yet Mural also "starts to proclaim an originality that is all Pollock's," opening the gateway into his heroic (and well-known) "drip period." Rothko, practicing an equally distinctive but entirely different kind of abstraction, ended up producing "some of the most moving paintings in all of the 20th century: saturated stains of color." Making reference to classical architecture — going back, even, to Stonehenge — his work becomes "a kind of threshold into which you, the viewer, project yourself," but its soft edges also give it a sense of "breathing, pulsating, and sometimes, of dying."

If you happen to have more than a minute available, how could you resist digging a bit deeper into the life and work of an artist like that? Or perhaps you'd prefer to get introduced to another: Henri Matisse or Grant Wood, say, or Kazimir Malevich or Joan Mitchell. You may just find one about whom you want to spend the rest of your years learning.

See all videos, including new ones down the road, at the Painters in 60 Seconds series playlist.

Related Content:

Edward Hopper’s Iconic Painting Nighthawks Explained in a 7-Minute Video Introduction

Jackson Pollock 51: Short Film Captures the Painter Creating Abstract Expressionist Art

Hear Marcel Duchamp Read “The Creative Act,” A Short Lecture on What Makes Great Art, Great

Walk Inside a Surrealist Salvador Dalí Painting with This 360º Virtual Reality Video

An Introduction to 100 Important Paintings with Videos Created by Smarthistory

Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.


things magazine: Shoe leather

Why Amazon’s ‘last mile’ is such a grind for those who have to walk it. A world that has created devices like the Flexbot6 / London’s Architectural Association threatens major cuts / Gurafiku, a tumblr about graphic design in Japan … Continue reading

The Shape of Code: Grace Hopper: Manager, after briefly being a programmer

In popular mythology Grace Hopper is a programmer who wrote one of the first compilers. I think the reality is that Hopper did some programming, but quickly moved into management; a common career path for freshly minted PhDs and older people entering computing (Hopper was in her 40s when she started); her compiler management work occurred well after many other compilers had been written.

What is the evidence?

Hopper is closely associated with Cobol. There is a lot of evidence for at least 28 compilers in 1957, well before the first Cobol compiler (can a compiler written after the first 28 be called one of the first?).

The A-0 tool, which Hopper worked on as a programmer in 1951-52, has been called a compiler. However, the definition of Compile used sounds like today’s assembler and the definition of Assemble used sounds like today’s link-loader (also see: section 7 of Digital Computers – Advanced Coding Techniques for Hopper’s description of what A-2, a later version, did).

The ACM’s First Glossary of Programming Terminology, produced by a committee chaired by Hopper in June 1954, gives the following definitions:

Routine – a set of coded instructions arranged in proper sequence to direct the computer to perform a desired operation or series of operations. See also Subroutine.

Compiler (Compiling Routine) – an executive routine which, before the desired computation is started, translates a program expressed in pseudo-code into machine code (or into another pseudo-code for further translation by an interpreter). In accomplishing the translation, the compiler may be required to:

Assemble – to integrate the subroutines (supplied, selected, or generated) into the main routine, i.e., to:

    Adapt – to specialize to the task at hand by means of preset parameters.

    Orient – to change relative and symbolic addresses to absolute form.

    Incorporate – to place in storage.

Hopper’s name is associated with work on the MATH-MATIC and ARITH-MATIC Systems, but it does not appear in the list of people who wrote the manual in 1957. A programmer working on these systems would likely have been involved in producing the manual.

After the A-0 work, all of Hopper’s papers relate to talks she gave, committees she sat on and teams she led, i.e., the profile of a manager.

Michael Geist: SOCAN Financial Data Highlights How Internet Music Streaming is Paying Off for Creators

Music industry lobby groups may frequently seek to equate the Internet with lost revenues, but an examination of financial data from one of Canada’s largest music copyright collectives demonstrates massive growth in earnings arising from Internet streaming, including major services such as YouTube and Apple Music. While many collectives do not publicly disclose their revenues, SOCAN, which represents composers, songwriters, and music publishers, provides a detailed breakdown of revenues and distributions in its annual report.

The reports show that since the 2012 copyright reform in Canada, SOCAN has experienced incredible growth in Internet streaming revenues. The 2013 SOCAN annual report noted that it was the first year that the collective distributed Internet streaming revenues ($3.4 million in revenue), which coincided with a performing rights licence for YouTube and an agreement that made it easier for members to receive additional money for music posted to the video site. Tracking the growth of revenues through the annual reports for 2014 ($12.4 million), 2015 ($15.5 million) and 2016 ($33.8 million), Internet streaming revenue is now SOCAN’s fastest-growing revenue source, having overtaken cinema, private copying, and satellite radio revenues, and is likely to surpass concert revenues in the coming year.


Figure: SOCAN Internet streaming revenues. Source: SOCAN Annual Reports 2013, 2014, 2015, 2016.


SOCAN is just one music copyright collective, and there are others that seek royalties for other participants in the music creation process. Indeed, the debate over Internet music streaming revenues is complex, with many rights holders vying for revenues in a fast-growing segment of the market. Yet despite attempts to paint the Internet as a source of disappearing revenues for creators, the publicly available data tells a different story.

In the case of songwriters, composers, and music publishers, the data is unmistakable: in the aftermath of the 2012 copyright reforms, SOCAN has generated a 10X increase in Internet streaming revenues, with growth of over 100 per cent over the past year alone. That isn’t a value gap. It is enormous economic value being generated for the benefit of creators and those that invest in them under current Canadian copyright rules.
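Those multiples follow directly from the annual figures quoted above; a quick arithmetic check:

```python
# SOCAN Internet streaming revenues (CAD millions), from the annual reports
revenues = {2013: 3.4, 2014: 12.4, 2015: 15.5, 2016: 33.8}

# Overall growth since streaming revenue was first distributed in 2013
overall_multiple = revenues[2016] / revenues[2013]

# Year-over-year growth for the most recent reported year
yoy_growth = (revenues[2016] - revenues[2015]) / revenues[2015]

print(f"{overall_multiple:.1f}x overall")     # 9.9x, i.e. roughly 10X
print(f"{yoy_growth:.0%} year-over-year")     # 118%, i.e. over 100 per cent
```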

The post SOCAN Financial Data Highlights How Internet Music Streaming is Paying Off for Creators appeared first on Michael Geist.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Influential Voices: An Interview with Evan Pricco of Juxtapoz

A conversation with the Editor-in-Chief of one of the most influential art publications of our generation.

Blog – Free Electrons: Back from ELCE: award to Free Electrons CEO Michael Opdenacker

The Embedded Linux Conference Europe 2017 took place at the end of October in Prague. We already posted about this event by sharing the slides and videos of Free Electrons talks and later by sharing our selection of talks given by other speakers.

During the closing session of this conference, Free Electrons CEO Michael Opdenacker received from the hands of Tim Bird, on behalf of the ELCE committee, an award for his continuous participation in the Embedded Linux Conference Europe. Indeed, Michael has participated in all 11 editions of ELCE without interruption. He has been very active in promoting the event, especially through the video recording effort that Free Electrons undertook in the early years of the conference, as well as through the numerous talks given by Free Electrons.

Michael Opdenacker receives an award at the Embedded Linux Conference Europe

Free Electrons is proud to see its continuous commitment to knowledge sharing and community participation be recognized by this award!

Planet Haskell: Philip Wadler: Pay what you want for Java Generics and Collections

Humble Book Bundle is selling off a passel of Java books, including Java Generics and Collections by Naftalin and Wadler, on a pay-what-you-want basis (USD $1 minimum), DRM-free. You choose what proportion of the profits goes to Humble and what goes to the charity Code for America. A great deal!

OCaml Weekly News: OCaml Weekly News, 21 Nov 2017

  1. Welcome new maintainers of opam repository, and introducing Obi
  2. Proj4 0.9.2 release - 4.06.0 compatible
  3. Tierless Web programming in ML
  4. Stdcompat, a compatibility module for OCaml standard library
  5. ppx_deriving_protocol 0.8
  6. amqp-client 1.1.4
  7. PhD position in Design and implementation of programming languages for embedded vision systems (with Caml inside)
  8. Ecaml: OCaml Emacs plugins tutorial
  9. Ocaml Github Pull Requests
  10. Other OCaml News

CreativeApplications.Net: Déguster l’augmenté – Adding new dimensions to food

'Déguster l’augmenté' is a collaborative project by Erika Marthins with ECAL (Bachelor Media & Interaction Design) that asks whether food can be augmented and whether technology can provide a new dimension to how we experience a meal.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: “Astronomia Nova” by Artists Faith47 & Lyall Sprong

An incredible lunar hologram by artists Faith 47 and Lyall Sprong. The immersive installation, staged in the forests of Sweden, pays tribute to the ancient internal rhythms and vast external forces that define us. See more images and video of “Astronomia Nova” by Chop Em Down Films below!



Open Culture: Colorful Maps from 1914 and 2016 Show How Planes & Trains Have Made the World Smaller and Travel Times Quicker

This time of year especially, we complain about the greed and arrogance of airlines, the confusion and inefficiency of airports, and the sardine seating of coach. But we don’t have to go back very far to get a sense of just how truly painful long-distance travel used to be. Just step back a hundred years or so when—unless you were a WWI pilot—you traveled by train or by ship, where all sorts of misadventures might befall you, and where a journey that might now take several dull hours could take several dozen, often very uncomfortable, days. Before railroads crossed the continents, that number could run into the hundreds.

In the early 1840s, for example, notes Simon Willis at The Economist’s 1843 Magazine, “an American dry-goods merchant called Asa Whitney, who lived near New York, travelled to China on business. It took 153 days, which he thought was a waste of time.” It’s probably easier to swallow platitudes about destinations and journeys when the journey doesn’t take up nearly half the year and run the risk of cholera. By 1914, the explosion of railroads had reduced travel times considerably, but they remained at what we would consider intolerable lengths.

We can see just how long it took to get from place to place in the “isochronic map” above (view it in a large format here), which visualizes distances all over the globe. The railways “were well-established,” notes Gizmodo, “in Europe and the U.S., too, making travel far more swift than it had been in the past.” One could reach “the depths of Siberia” from London in under ten days, thanks to the Trans-Siberian Railway. By contrast, in Africa and South America, “any travel inland from the coast took weeks.”

The map, created by royal cartographer John G. Bartholomew, came packaged with several other such tools in An Atlas of Economic Geography, a book, Willis explains, “intended for schoolboys,” containing “everything a thrusting young entrepreneur, imperialist, trader or traveller could need.” All of the distances are measured in “days from London,” and color-coded in the legend below. Dark green areas, such as Sudan, much of Brazil, inland Australia, or Tibet might take over 40 days travel to reach. All of Western Europe is accessible, the map promises, within five days, as are parts of the east coast of the U.S., with parts further Midwest taking up to 10 days to reach.

What might have seemed like wizardry to Walter Raleigh probably sounds like hell on earth to business class denizens everywhere. How do these journeys compare to the current age of rapid air travel? Rome2rio, a “comprehensive global trip planner,” aimed to find out by recreating Bartholomew’s map, updated to 2016 standards. You can see, just above (or expanded here), the same view of the world from its onetime imperialist center, London, with the same color-coded legend below, “Distances in Days from London.” And yet here, a journey to most places will take less than a day, with certain outer reaches—Siberia, Greenland, the Arctic Circle—stretching into two, maybe three.

Should we have reason to complain, when those of us who do travel—or who must—have it so easy compared to the danger, boredom, and general unpleasantness of long-distance travel even one-hundred years ago? The question presumes humans are capable of not complaining about travel. Such complaint may form the basis of an ancient literary tradition, when heroes ventured over vast terrain, slaying monsters, solving riddles, making friends, lovers, and enemies…. The epic dimensions of historic travel can seem quaint compared to the sterile tedium of airport terminals. But just maybe—as in those long sea and railway voyages that could span several months—we can discover a kind of romance amidst the queasy food courts, tacky gift shops, and motorized moving walkways.

via  1843 Magazine

Related Content:

A Colorful Map Visualizes the Lexical Distances Between Europe’s Languages: 54 Languages Spoken by 670 Million People

Download 67,000 Historic Maps (in High Resolution) from the Wonderful David Rumsey Map Collection

The Roman Roads of Britain Visualized as a Subway Map

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

Colorful Maps from 1914 and 2016 Show How Planes & Trains Have Made the World Smaller and Travel Times Quicker is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: A Futuristic New Library in China Features Illusion of Endless Bookshelves

Dutch design firm MVRDV has completed the Tianjin Binhai Library in China. Made in collaboration with local architects TUPDI, the 34,000-square-meter space features an otherworldly spherical structure in the centre and mesmerizing undulating bookshelves. It can also hold up to 1.2 million books! If you’re wondering how you’d even reach some of them: many of the covers in the hall are printed images of real books (stored elsewhere in the building), which just adds to the sci-fi vibe of the whole thing, as the real question becomes which books are real and which are fake! See more images below.

Comic for 2017.11.21

New Cyanide and Happiness Comic

Penny Arcade: News Post: The Beef Below

Tycho: Jesus!  I can’t fucking get it together today.  There is a tremendous amount of subconscious filing going on in the waxy, hive-like substructures of my consciousness and it’s interfering with attempts to do anything but lie in bed and watch The Punisher.  Is this a recognized condition?  Maybe it’s new. It’s hard to imagine that, just a couple weeks ago, I was at a supremely delicious PAX Aus and now I’m trying to integrate the entirely new PAX Unplugged into my globetrotting lifestyle.  Here’s the strip from the Make-A-Strip panel,…

Penny Arcade: News Post: Happy Birthday Penny Arcade!

Gabe: I think it’s worth mentioning that Penny Arcade turned 19 years old on Saturday. All this started on November 18th 1998 which seems like ancient history at this point. I can still remember sitting in a chinese restaurant in Spokane with Kara and Jerry as he and I wrote that very first strip. Penny Arcade has changed a lot over the years but I have to say I have never been happier to go to work than in the last year or so. I’m not sure how many people can say that after working at a job for nearly 20 years but I feel incredibly fortunate that I get to do this every day. So I just want to…

OCaml Planet: Migration to GitHub is complete

After several years of using GitHub specifically for its pull request system, the Coq development team has migrated the Coq bug tracker and Cocorico, the Coq wiki, to GitHub as well.

More information about the migration of the Coq bug tracker may be found in this blog post.

More information about the migration of Cocorico, the Coq wiki, may be found on this wiki page.

Finally, the GitHub repository is now the repository we push to (as opposed to a mirror). Make sure that your git clone is tracking it so you are always up-to-date.

s mazuk: paperbackben: Spock Messiah! by Theodore R. Cogswell and...


Spock Messiah! by Theodore R. Cogswell and Charles A. Spano, Jr.

Cover by Gene Szafran

Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: Unleashed

Being paleo, I’m no Bitcoin buddy. But you can’t ignore this thing. There’s a growing mob of people convinced the future of money will not be like the past. No fiat currency. No central banks. No government controls. No interest rates. No printing presses. No cash tied to political decisions or interference. As a result Bitcoin, and other rival cryptocurrencies, have spawned a Wild West of finance. Compared to this, real estate in Toronto or Vancouver is a giant, boring, brain-dead GIC.

The latest news comes as Bitcoin advanced to $8,000 US. That’s a 700% gain this year, taking the total value of the invisible, digital, unbankable currency to $130 billion. Along the way there has been extreme volatility – three recent plunges of 25% each, enough to freak out even the most iron-gutted investor.

Despite this, the number of supporters is ramping up daily. Surprisingly, now the doomers have signed on. Some people say Bitcoin is sucking off a lot of support from the traditional hedge-against-paper-money, which is gold.

Wow. Look at this chart. Gold’s comatose. Bitcoin is on fire.

Of course, why someone who thinks the world might blow up would put all their faith in money that only exists if you have a good Internet connection and a charged-up phone is curious. Our gossamer grid might be the first piece of infrastructure to fail. Go figure.

But the appeal of digital money to those who trust no elected person, national government, central banker or multinational, globalist corporation is obvious. Bitcoins are hard to mine, unlike the way the US government prints trillions of dollars, for example, or the Fed simply balloons its balance sheet. A limited supply is intended to maintain value – but it also means when demand for the stuff increases (like now) the worth of each digital unit inflates wildly.

Bitcoin isn’t actually money you can possess and hoard. Instead your pile is kept in a virtual wallet forming part of an online payments network which is completely decentralized. That’s called the blockchain – at the heart of the up-yours mentality which created digital currency. Anonymous. Sovereign. Free.

Bitcoiners (and the lovers of other forms now rivaling it) look at the way the world almost went off a cliff during the credit crisis and have decided they want nothing to do with a system that can ricochet, domino-like into utter mayhem. At least that was the genesis. Now speculators, traders and institutions are piling on, with the smell of quick profits in their nostrils.

Next month, for example, Bitcoin futures will start trading on the world’s biggest exchange. There are now bitcoins backed by gold. Could it be long before we see one tied to dollars? Will digital money ultimately be co-opted by global financial institutions who themselves are looking for relief from capricious national governments and paternalistic central banks? With free trade having broken barriers everywhere, a single protectionist leader like Trump has the potential to wreak havoc. Maybe non-political, uncontrollable, non-inflationary money is the answer – freed from the jerking around that one nation’s interest rate policy can create.

So, yes, Bitcoin has a future. But we’re not there yet. And investing in it poses extreme risk.

The very thing that attracts – the absence of manipulation – is what also makes digital money toxic to average investors. Fiat currency (the stuff we use every day to finance our lives) is tightly controlled by central banks to smooth out wild gyrations affecting its value. They do this by expanding or contracting the overall money supply, adjusting the cost of money (interest rates), actively fighting inflation and deflation and working with governments to issue and control public debt. Just look at what the Fed has done since 2008. Astonishing. And witness what an engineered collapse in interest rates did to Canadian housing prices. Stunning.

As a paleo, bitcoins are not in my portfolio. The volatility’s too much. The downside too great. But this week the Canada Pension Plan Investment Board revealed it has a staff of 100 people watching, analyzing and possibly preparing to jump in.

Imagine that. Your CPP dollars, out there, braless, untamed, astride blistering neurons with survivalists and nomads. Now I need a scotch.

new shelton wet/dry: It was put in the newses what he did, nicies and priers, the King fierceas Humphrey, with illysus distilling, exploits and all

The Food and Drug Administration has approved the first digital pill for the US, which tracks whether patients have taken their medication. The pill, called Abilify MyCite, is fitted with a tiny ingestible sensor that communicates with a patch worn by the patient — the patch then transmits medication data to a smartphone app [...]

Open Culture: What Made Freddie Mercury the Greatest Vocalist in Rock History? The Secrets Revealed in a Short Video Essay

I wasn’t always a Queen fan. Having cut my music fan teeth on especially downbeat, miserable bands like Joy Division, The Cure, and The Smiths, I couldn’t quite dig the unabashed sentimentality and operatic bombast. Like one of the “Kids React to Queen” kids, I found myself asking, “What is this?” What turned me around? Maybe it was the first time I heard Queen’s theme song for Flash Gordon. The 1980 space opera is most remarkable for Max von Sydow’s turn as Ming the Merciless, and for those bursts of Freddie Mercury and his mates’ multi-tracked voices, explosions of syncopated angel song, announcing the coming of the eighties with all the high camp of Rocky Horror and the rock confidence of Robert Plant.

As a frontman Mercury had so much more than the perfect style and stance—though he did own every stage he set foot on. He had a voice that commanded attention, even from mopey new wave teenagers vibrating on Ian Curtis’s frequency. What makes Mercury’s voice, which many would call the greatest in all of rock history, so compelling? One recent scientific study concluded that Mercury’s physical method of singing resembled that of Tuvan throat singers.

He was able to create a faster vibrato and several more layers of harmonics than anyone else. The video above from Polyphonic adds more to the explanation, quoting opera soprano Montserrat Caballé, with whom Mercury recorded an album in 1988. In addition to his incredible range, Mercury “was able to slide effortlessly from a register to another,” she remarked. Though Mercury was naturally a baritone, he primarily sang as a tenor, and had no difficulty, as we know, with soprano parts.

Mercury was a great performer—and he was a great performative vocalist, meaning, Caballé says, that “he was selling the voice…. His phrasing was subtle, delicate and sweet or energetic and slamming. He was able to find the right colour or expressive nuance for each word.” He had incredible discipline and control over his instrument, and an underrated rhythmic sensibility, essential for a rock singer to convincingly take on rockabilly, gospel, disco, funk, and opera as well as the blues-based hard rock Queen so easily mastered. No style of music eluded him, except perhaps for those that call for a certain kind of vocalist who can’t actually sing.

That’s the rub with Queen—they were so good at everything they did that they can be more than a little overwhelming. Watch the rest of the video to learn more about how Mercury’s superhuman vibrato produced sounds almost no other human can make; see more of Polyphonic’s music analysis of one-of-a-kind musicians at our previous posts on Leonard Cohen and David Bowie’s final albums and John Bonham’s drumming; and hear all of those Mercury qualities—the vibrato, the perfect timing, and the expressive performativity—in the isolated vocal track from “I Want to Break Free” just below.

Related Content:

Scientific Study Reveals What Made Freddie Mercury’s Voice One of a Kind; Hear It in All of Its A Cappella Splendor

Watch Behind-the-Scenes Footage From Freddie Mercury’s Final Video Performance

Queen Documentary Pays Tribute to the Rock Band That Conquered the World

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

What Made Freddie Mercury the Greatest Vocalist in Rock History? The Secrets Revealed in a Short Video Essay is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

TheSirensSound: New single Pyro by Stale Lane

Finnish alternative rock group Stale Lane has released a new single and music video, 'Pyro'. Since the late 1990s, musicians Tomi and Sami have played loads of shows and recorded a number of demos in various line-ups, most notably MALfUNCTION, until MALfUNCTION called it quits in 2007.

After a couple of silent years, the former bandmates started making music together again, the idea being not so much to form a band as to build something more collaborative, incorporating the many friends the two have come to know over the years. This became STALE LANE. And their first single, "Pyro", is here.

TheSirensSound: New single You Don't See Me by The Tambourine Girls

The Tambourine Girls are a four-piece band from Sydney, Australia. Formed by ex-Deep Sea Arcade guitarist Simon Relf, the band released their debut EP 'The End of Time' in 2014 and followed up with their self-titled debut LP in 2016. Relf has toured extensively as a solo act supporting Dustin Tebbutt and Megan Washington, and the band are set to release their second LP in early 2018. "You Don't See Me" is the first single from that second LP. The album is the first ever to be recorded at Golden Retriever Studios in Sydney, run by Simon Berckelman (The Philadelphia Grand Jury); it was engineered by Tim Whitten (The Go-Betweens, Lupa J, The Church, Augie March) and mixed by Nick Franklin (Australia, Richard In Your Mind, Matt Corby).

"This track is my take on Bob Dylan's 'Talkin' World War III Blues.' There's a line in there that could save the world if it were considered more often: 'I don't blame him too much though, he didn't know me,'" explains Simon. The song has a driving, robotic rhythm contrasted with sweeping synth-like guitars and a loose, almost conversational vocal melody. The tension is released in the final verse with the bittersweet resolution: "I think you loved me completely in darkness, so that's where I'll be."

"You Don't See Me" was released through MGM on 11/17, with an intimate NSW launch gig at Sneaky Possum in Chippendale on 11/23.

TheSirensSound: New album Beasts in binary city by mandala eyes

The new album "Beasts in Binary City" by Mandala Eyes is all improvised electric guitar and looper, underlaid by likewise improvised electronic beats. The story is about wildness struggling to exist in a reductive environment, a reflection of the artist's internally difficult time living in a city in middle America over the past year.

New Humanist Blog: Who's afraid of diversity in education?

There appears to be an old-fashioned backlash against any challenge to a lack of diversity in academia.

Colossal: LEGOs Snap Into Place in Hintlab’s Line of Playful Rings and Earrings

Paris-based design duo Hintlab amplifies the nostalgia tied to Lego bricks by bringing the classic children’s toy to an older audience. Their line of earrings and rings is made to house small, interchangeable bricks, allowing customers to customize their look depending on their mood or whim. Each piece of 3D-printed jewelry comes with a set of ten objects that can be worn as a single setting or stacked to create a multi-layer work.

Hintlab has also developed a line of jewelry that fits flush in its setting. The color and shape of the flat bricks still reflect the feeling of Lego, but are housed in a more minimal package. You can buy your own interchangeable set on the group’s Etsy, and see past designs on their Instagram. (via Designboom)

Planet Haskell: Mark Jason Dominus: Mathematical jargon for quibbling

Mathematicians tend not to be the kind of people who shout and pound their fists on the table. This is because in mathematics, shouting and pounding your fist does not work. If you do this, other mathematicians will just laugh at you. Contrast this with law or politics, which do attract the kind of people who shout and pound their fists on the table.

However, mathematicians do tend to be the kind of people who quibble and pettifog over the tiniest details. This is because in mathematics, quibbling and pettifogging does work.

Mathematics has a whole subjargon for quibbling and pettifogging, and also for excluding certain kinds of quibbles. The word “nontrivial” is preeminent here. To a first approximation, it means “shut up and stop quibbling”. For example, you will often hear mathematicians having conversations like this one:

A: Mihăilescu proved that the only solution of Catalan's equation x^p − y^q = 1 is 3^2 − 2^3 = 1.

B: What about when x and y are consecutive and p = q = 1?

A: The only nontrivial solution.

B: Okay.

Notice that A does not explain what “nontrivial” is supposed to mean here, and B does not ask. And if you were to ask either of them, they might not be able to tell you right away what they meant. For example, if you were to inquire specifically about x = 1, y = 0, they would both agree that that is also excluded, whether or not that solution had occurred to either of them before. In this example, “nontrivial” really does mean “stop quibbling”. Or perhaps more precisely “there is actually something here of interest, and if you stop quibbling you will learn what it is”.
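For concreteness, the quibbles being waved away can be written out. This is a sketch, taking Catalan's equation in the form x^p − y^q = 1 (the transcription leaves the equation implicit):

```latex
% Catalan's equation, over the positive integers:
%   x^p - y^q = 1
% Trivial solution families a quibbler might point at:
%   consecutive bases with unit exponents,
\[
  (n+1)^1 - n^1 = 1 \qquad \text{for every } n \ge 1,
\]
%   and base 1 over base 0, with any exponents:
\[
  1^p - 0^q = 1 \qquad \text{for all } p, q \ge 1.
\]
% Mihailescu's theorem: the only solution with x, y, p, q \ge 2 is
\[
  3^2 - 2^3 = 9 - 8 = 1.
\]
```

The word “nontrivial” dismisses the first two families in one breath, leaving only the solution that the theorem is actually about.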

In some contexts, “nontrivial” does have a precise and technical meaning, and needs to be supplemented with other terms to cover other types of quibbles. For example, when talking about subgroups, “nontrivial” is supplemented with “proper”:

If a nontrivial group has no proper nontrivial subgroup, then it is a cyclic group of prime order.

Here the “proper nontrivial” part is not merely to head off quibbling; it's the crux of the theorem. But the first “nontrivial” is there to shut off a certain type of quibble arising from the fact that 1 is not considered a prime number. By this I mean if you omit “proper”, or the second “nontrivial”, the statement is still true, but inane:

If a nontrivial group has no subgroup, then it is a cyclic group of prime order.

(It is true, but vacuously so.) In contrast, if you omit the first “nontrivial”, the theorem is substantively unchanged:

If a group has no proper nontrivial subgroup, then it is a cyclic group of prime order.

This is still true, except in the case of the trivial group, which is no longer excluded from the premise. But if 1 were considered prime, it would be true either way.

Looking at this issue more thoroughly would be interesting and might lead to some interesting conclusions about mathematical methodology.

  • Can these terms be taxonomized?
  • How do mathematical pejoratives relate? (“Abnormal, irregular, improper, degenerate, inadmissible, and otherwise undesirable”) Kelley says we use these terms to refer to “a problem we cannot handle”; that seems to be a different aspect of the whole story.
  • Where do they fit in Lakatos’ Proofs and Refutations theory? Sometimes inserting “improper” just heads off a quibble. In other cases, it points the way toward an expansion of understanding, as with the “improper” polyhedra that violate Euler's theorem and motivate the introduction of the Euler characteristic.
  • Compare with the large and finely-wrought jargon that distinguishes between proofs that are “elementary”, “easy”, “trivial”, “straightforward”, or “obvious”.
  • Is there a category-theoretic formulation of what it means when we say “without loss of generality”?

[ Addendum: Kyle Littler reminds me that I should not forget “pathological”. ]

Penny Arcade: Comic: The Beef Below

New Comic: The Beef Below

Quiet Earth: The Quietcast: BEYOND SKYLINE with Director Liam O'Donnell [Interview]

[Editor's note: You can now subscribe to Quiet Earth's podcast on iTunes or via RSS!]

On this episode of The Quietcast, I speak with writer/director Liam O'Donnell about his debut feature Beyond Skyline. This is, of course, the sequel to the Brothers Strause's original alien invasion film Skyline, which, believe it or not, came out 7 years ago this month!

In the almost hour-long conversation, Liam opens up about how the first film landed [Continued ...]

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - A Vicious Cycle

Click here to go see the bonus panel!

I'm not sure why, but I find the idea that the bicycle has a switchblade to be comedy gold.

New comic!

Blog – Free Electrons: Linux 4.14 released, Free Electrons contributions

Drawing from Mylène Josserand,
based on a picture from Samuel Blanc under CC-BY-SA

Linux 4.14, which is going to become the next Long Term Support version, was released a week ago by Linus Torvalds. As usual, LWN provided interesting coverage of this release cycle’s merge window, highlighting the most important changes: The first half of the 4.14 merge window and The rest of the 4.14 merge window.

According to Linux Kernel Patch statistics, Free Electrons contributed 111 patches to this release, making it the 24th contributing company by number of commits: a somewhat lower than usual contribution level from our side. At least, Free Electrons cannot be blamed for trying to push more code into 4.14 because of its Long Term Support nature! 🙂

The main highlights of our contributions are:

  • On the RTC subsystem, Alexandre Belloni made as usual a number of fixes and improvements to various drivers, especially the ds1307 driver.
  • On the NAND subsystem, Boris Brezillon did a number of small improvements in various areas.
  • On the support for Marvell platforms
    • Antoine Ténart improved the ppv2 network driver used by the Marvell Armada 7K/8K SoCs: support for 10G speed and TSO support are the main highlights. In order to support 10G speed, Antoine added a driver in drivers/phy/ to configure the common PHYs in the Armada 7K/8K SoCs.
    • Thomas Petazzoni also improved the ppv2 network driver by adding support for TX interrupts and per-CPU RX interrupts.
    • Grégory Clement contributed some patches to enable NAND support on Armada 7K/8K, as well as a number of fixes in different areas (GPIO fix, clock handling fixes, etc.)
    • Miquèl Raynal contributed a fix for the Armada 3700 SPI controller driver.
  • On the support for Allwinner platforms
    • Maxime Ripard contributed the support for a new board, the BananaPI M2-Magic. Maxime also contributed a few fixes to the Allwinner DRM driver, and a few other misc fixes (clock, MMC, RTC, etc.).
    • Quentin Schulz contributed the support for the power button functionality of the AXP221 (PMIC used in several Allwinner platforms)
  • On the support for Atmel platforms, Quentin Schulz improved the clock drivers for this platform to properly support the Audio PLL, which made it possible to fix the Atmel audio drivers. He also fixed suspend/resume support in the Atmel MMC driver to support the deep sleep mode of the SAMA5D2 processor.

In addition to making direct contributions, Free Electrons is also involved in the Linux kernel development by having a number of its engineers act as Linux kernel maintainers. As part of this effort, Free Electrons engineers have reviewed, merged and sent pull requests for a large number of contributions from other developers:

  • Boris Brezillon, as the NAND subsystem maintainer and MTD subsystem co-maintainer, merged 68 patches from other developers.
  • Alexandre Belloni, as the RTC subsystem maintainer and Atmel ARM platform co-maintainer, merged 32 patches from other developers.
  • Grégory Clement, as the Marvell ARM platform co-maintainer, merged 29 patches from other developers.
  • Maxime Ripard, as the Allwinner ARM platform co-maintainer, merged 18 patches from other developers.

This flow of patches from kernel maintainers to other kernel maintainers is also nicely described for the 4.14 release by the article Patch flow into the mainline for 4.14.

The detailed list of our contributions:

Comic for 2017.11.20

New Cyanide and Happiness Comic

Ansuz - mskala's home page: Tsukurimashou 0.10

I've posted version 0.10 of the Tsukurimashou Project - the first new version in more than three years. As you may or may not recall, this is my ongoing effort to build a parametric meta-family of Japanese-language typefaces, using the METAFONT technology of TeX. A brief summary of the concept is that instead of just drawing the character glyphs for the fonts, I'm writing software to generate the fonts; then different weights and styles come out by choosing different settings.

TheSirensSound: New track Population Grave by Laytcomers

The Laytcomers are a collective of ambitious losers, trapped inside a small Bay Area garage with a 4-track. They produce a mixture of noise rock, post-punk, and even some elements of twee and Kiwi underground, all showcased on their new track "Population Grave".

"The first time we tried to come up with songs was when me (Ilya) and two of our other friends, who were original members of The Laytcomers, came to Cyes' place in Davis, where he went to college," Ilya explains. "I was hoping we were going to write some music, but at that time no one was really motivated to do anything, until I actually sat down with a cheap microphone myself and played/sang (pretty terribly), forcing everyone to contribute. Most of the sounds from that time were almost unlistenable. For drums we used a Guitar Hero drum set, which we used to write some of our first songs (Coldfront, Coppertone). There weren't a clear idea who does what, I was switching between bass and acoustic guitar, both of which I could barely even play at the time."

"We came a long way in terms of musical development and taste. In recent years our project The Laytcomers started to gain its own identity especially after we got two new members from Craigslist (Sam on saxophone and John on drums). We still walk a line between catchy indie-rock, based on short catchy bass lines and guitar riffs, psychedelic sounds and ear-bleeding noise rock with amp feedback and all kinds of noises."

"Our title song Population Grave wasn't coming along for awhile. We recorded original drum/vocal/guitar version almost five years ago. Something wasn't working out in it until I actually rethought the bassline from the scratch. It turned into a catchy angular post-punk thingy with roots in the no wave and New Zealand noise-rock."

TheSirensSound: New album Outlive Your Body by Firesuite

Formed by Chris Anderson, with Chris Minor, Richard Storer & Stuart Longden, Firesuite are an ongoing musical endeavour, born out of the desire to create something loud and hugely affecting. Combining polar-opposite dynamics, from total white noise to moments of striking beauty, Firesuite are as complex a unit as you are likely to find.

"Firesuite was started as a tribute to my little brother, Daniel. That's him on the front cover of You're An Ocean Deep, My Brother. That album was written as a way to reach him somehow. Much of the material on it was written in the immediate aftermath of his death. Outlive Your Body is still very much infused with his memory, but was written more as a way to make him proud.

Daniel introduced me to so many bands, bands I love to this day and were integral to my musical adolescence. He brought home Jeff Buckley, Deftones and Breeders albums all within a few weeks and that was that. Mission complete. I was obsessed.

I have always approached everything we have done as a band with a sense of fatalism, like it will be the very last thing we do. On this occasion, that turned out to be a self-fulfilling prophecy. It became clear, recording the album high up in the hills of Sheffield, that this would be the last thing we would ever complete as a band. Given that, it became that much more important that I put everything I have into it. We made the album during a number of sessions at Old Pig Farm spread across a number of years. The project began without any clear direction; we thought we might record a couple of EPs, perhaps. As time went on, though, it became clear an album was forming. Slowly.

So this is the record. It was pieced together with John Sephton, who became a friend and ally throughout the process. He guided it to completion, along with a few helping hands along the way. We recruited Dave Sanderson to assist in mixing several songs, in no small part due to his incredible work on the 65 DAYS OF STATIC discography. Caroline Cawley came in to add vocals to Little Sacrifices, and Matthew Pronger added layers of brass to swell the end of Harbour. Lucy Revis and Chris Endcliffe lifted Edge Of The Earth and Lights into the upper atmosphere.

The songs are about lots of things. They are about wanting to escape (Harbour), about mine and Daniel's experiences growing up (Eulogy), about close friends (SJVL). They are also about the band coming to an end. Lights was intended to be the soundtrack to this.

I am so proud of this record. We have worked so hard to build it into the thing it is. I am very happy to share it with you, and would love it if you would share it with others. I am heartbroken about the end, but if something has been worth doing, then this should be how it feels when it ends."

Chris A 

new shelton wet/dry: Midway through the show, we realized we were sitting so close to Friday Night Lights’s gorgeous Connie Britton that we had to physically restrain ourselves from touching her hair

I know of an art historian who was asked to authenticate a work by Leonardo, and he was going to, you know, charge the normal kind of fee charged for doing this kind of thing — a low six figures. And the owner said, “No, no, no. We want to pay you a percentage of [...]

Perlsphere: From Zero to HTTPS in an afternoon

I've been hosting my own humble personal web site since 2012. I had never bothered setting up HTTPS for my domain, but after hearing about the Let's Encrypt project, I was completely out of excuses.

For the unfamiliar, Let's Encrypt offers free and fully automatic HTTPS certificates. The web cares about HTTPS now more than ever. Deeply interactive interfaces like geolocation and user media (camera, microphone) are too sensitive to trust to an insecure transport. By leveraging the security features present in modern browsers, users can expect reasonable safety from attacks that would exploit the weaknesses of HTTP.

To take the security mission even further, I decided to completely containerize my server and expose only a couple of ports. Using a Docker composition made it very easy to deploy an up-to-date nginx and keep it isolated from the rest of my host.

The first mission was to set up certificates with certbot, the EFF's free certificate tool. certbot has a plugin that writes nginx configuration for you, but in this case I didn't want nginx installed on my host at all. Instead of following the nginx-specific instructions for my platform, I opted for the webroot plugin to just give me a certificate and let me figure out how to set it up. A certbot invocation and a few seconds later, I had certificates for my site in /etc/letsencrypt/live/
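A sketch of such a webroot invocation, assuming example.com stands in for the real domain and that the site's files live in the htdocs directory that later gets mounted into the container:

```shell
# Request a certificate only (no installer plugin). certbot answers the
# HTTP-01 challenge by writing a token under the served webroot, so -w must
# point at the directory the web server actually serves.
sudo certbot certonly --webroot -w /home/matt/htdocs -d example.com
```

On success, the certificate and key land under /etc/letsencrypt/live/ as described above.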

Next I went shopping for nginx Docker images. The official nginx image has everything I want: the latest and greatest mainline nginx based on stable Debian. I considered the Alpine variant, but felt like Debian was a better choice for me; familiarity outweighs a few tens of MB of image size.

The nginx image ships with a default configuration serving a single root directory over HTTP. Since HTTPS was the point of this experiment, I set out to correct this. I started by creating a project directory on the host to house all the configuration needed to build out my server. Then I started up a container with the vanilla configuration and copied config files /etc/nginx/nginx.conf and /etc/nginx/conf.d/default.conf out of the container to the project directory. With those config files now in my possession, I created a simple Dockerfile to inject them into a new image based on the library nginx image.

  FROM nginx:latest

  COPY nginx.conf /etc/nginx/nginx.conf
  COPY default.conf /etc/nginx/conf.d/default.conf

With that out of the way, I started hacking config to create my ideal HTTPS server. First I set up a redirect to force all traffic to the HTTPS site.

  server {
    listen 80;
    return 301 https://$server_name$request_uri;
  }

Now I would need a good way to get my certificates into the container. Docker Compose has a handy secrets directive to make this really painless.

  version: '3.1'  # must be at least 3.1 for secrets feature

  secrets:
    ssl_privkey:
      file: "/etc/letsencrypt/live/"
    ssl_fullchain:
      file: "/etc/letsencrypt/live/"

  services:
    nginx:
      container_name: "nginx"
      build: "."
      ports:
        - "80:80"
        - "443:443"
      volumes:
        - "/home/matt/htdocs:/usr/share/nginx/html:ro"
      restart: "on-failure"
      secrets:
        - "ssl_privkey"
        - "ssl_fullchain"

This mounts the provided secrets in /run/secrets to be scooped up by the site config.

  server {
    listen 443 ssl http2 default_server;

    ssl_certificate /run/secrets/ssl_fullchain;
    ssl_certificate_key /run/secrets/ssl_privkey;
  }

Now I can update my server by running docker-compose build --pull and then docker-compose up -d. This may cause a momentary outage while the containers are being swapped, but for a personal site this is nothing to sweat over. I dropped these commands into a cron script, since I like updates but would rather not have to think about them.
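A minimal sketch of such a cron script, assuming a hypothetical project directory at /home/matt/nginx; certbot renew only touches certificates nearing expiry, so it is safe to run on a schedule:

```shell
#!/bin/sh
# Renew any certificates close to expiry, quietly (output only on failure).
certbot renew --quiet
# Rebuild against the latest base image and swap in the updated container.
cd /home/matt/nginx || exit 1
docker-compose build --pull
docker-compose up -d
```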

With my new HTTPS site now exposed to the world, I found some free HTTPS validation tools to check my work and optimize the configuration a few notches beyond the "pretty good" nginx defaults. If you've deployed an HTTPS site for work or pleasure, check out the collection of web security tools rounded up by Phin in September.

I was really happy with the Let's Encrypt tools and user experience. If you're still hosting HTTP with no HTTPS option, consider using the free tools to get your free certificate and help your users protect their privacy. If you're interested in using HTTPS wherever available, consider using the HTTPS Everywhere extension offered by the EFF.

things magazine: Making things happen

Various things. paintings by Yoko Akino / 13 Horror Films for Architecture and Urban Design enthusiasts / Jony Ive on Apple Park / interactive novation fun / car models by Stephane Dufrene / Ministry Assistant is an app designed for …

Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: Causation

American stocks are over-valued by about 20%. So, a correction could carve 4,000 or 5,000 points from the Dow. Gulp. But, it’s not going to happen. At least not soon. There’s much more to come – great for those with careful US exposure and gnarly for the value of your house.

US corporations are on a roll with robust, rising profits. Given the expectations for earnings in early ’18, stocks don’t look so nose-bleed after all. Then there’s the Trump tax cut. If business taxes fall from 35% to 20%, roughly 10% is added to the bottom lines of most organizations. Yep, up she goes. Then there’s unemployment, as we used to call it. Now its name is “full employment” – exactly how economists define it when 96% of the people who want jobs find them. In fact there are currently more vacant job openings in America than jobless Americans to fill them.

This gets better, say Goldman Sachs economists. The 4.1% rate will actually wither to 3.5% by the end of 2018. We haven’t seen that for almost 60 years, when people drove four-ton cars with fins on them. Meanwhile, inflation’s back. After the quasi-deflation and huge central bank bail-outs of the Obama years, prices and wages are rising again. So the Fed is cutting back on monetary stimulus.

What it means: US interest rates have increased four times in 12 months and will likely go up again in December. In 2018 (which starts in a few weeks) the consensus opinion is for three more increases, although Goldman said on Friday it sees four. The Fed rate – which was at zero little more than a year ago – will be 2% or more. Yes, a big deal. But a growing, robust, broad-based economy like that of the US can absorb higher inflation and rising rates when wages are also growing along with the job market.

What it means to investors: All good, basically. As the largest economy in the world expands in an orderly fashion, with rising corporate profits, broadening employment and buoyant consumer confidence, Americans continue to stuff equities into their 401(k) accounts (RRSP equivalents) and markets advance. Could the Dow ever hit 30,000? Of course. Expect some corrections along the way, but there’s nothing on the radar now to signal a crash, reversal, crisis or general OMG moment. In other words, a 20% drop between now and 2018 would be a big surprise. And a bigger buying opportunity.

Meanwhile it’s no coincidence that most other markets – the UK, emerging nations, Japan, for example – are also bouncing around at record levels. The wusses among us recoil and say, “everything is in a bubble. We’re truly pooched this time.” But in reality the deflationary low-rate, low-growth, low-inflation switch was flipped about a year ago (yes, with Trump), and we’re now into the next phase. Higher prices. Improving incomes, at least in the States. Fatter equity values. A return to inflation. And swelling interest rates.

What it means to houses: There’s no way the Fed hikes rates three or four times in 2018 (plus once next month) and our guys stay idle. The Bank of Canada rate will increase at least twice, and more likely three times, in the next 12 months. That will add at least half or three-quarters of a point to all loans. HELOCs (about $280 billion are outstanding, most of them variable rate) will cost about 4%, as will five-year mortgages. Given the universal stress test, in place in about two weeks, buyers must qualify a year from now (or sooner) at about 6%. This is a 300% increase from early 2017.

If you don’t think that matters because your spouse is still house-lusty, immigrants are teeming in with bags of money, there’s no more land and everybody wants to live exactly where you are, look at this chart:

BMO economics simply charted the start of interest rate increases earlier this year, and the net impact on real estate values. The chart also shows you what happened to housing hormones back in 2015 when the Bank of Canada rashly chopped rates twice. As this pathetic blog has been yapping about for the past few years, the correlation between the cost of money and property values is absolute and irrefutable. So guess what happens when people have to qualify at 6%?

It was instructive a couple of weeks ago when CMHC exec Michel Tremblay said, “the dream of home ownership may be fading for many Canadians.” Tremblay did not stop there. He suggested long-term renting might be a better option. CMHC. Imagine.

Maybe it’s started. Mid-November resale numbers in the GTA were awful. And new house sales have plunged by two-thirds as the price of a freshly-built home in the distant burbs soars past $1.1 million. The people rushing to ‘beat’ the stress test, to borrow more now than they’ll qualify for next year, give this blog its name. 2018 could be epic.


Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Pareto Romantic

Click here to go see the bonus panel!

You're the best you can be, which is technically the highest true compliment I can pay you, so why do you look sad?

New comic!
Today's News:

Perlsphere: Meta::Hack 2

Meta::Hack 2 - 2017


Meta::Hack is about getting the core MetaCPAN team together for a few days to work on improving... well, as much as possible! Last year we focused on deploying to a new infrastructure, with a new version of Elasticsearch. This year we had a much more varied set of things we wanted to achieve.

Why get together?

Whilst Olaf couldn't attend in person, we had him up on the big screen in the ServerCentral offices (ServerCentral kindly hosted us and bought us lunch), so it was almost as good as him being physically there. Having us together meant we could really support each other as questions arose. A fun one was tracking down that the plain string "JSON::PP::Boolean" incorrectly identifies as is_bool in JSON::PP; there is a pull request, though that's not released yet. We also found bugs in our own code!
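As I understand the misbehaviour, is_bool checked its argument with UNIVERSAL::isa, which treats a plain string as a class name, so the string "JSON::PP::Boolean" itself passed the check. A one-liner sketch of what that looked like:

```shell
# With an affected JSON::PP this printed "bool"; with the fix it prints "not bool".
perl -MJSON::PP -E 'say JSON::PP::is_bool("JSON::PP::Boolean") ? "bool" : "not bool"'
```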



I spent a lot of my time with Brad, who has been setting up logging and visualisation. I set up Kibana ready for when we get the data, and I reviewed some of Graham's changes that will make the logs from our applications easier to control. Brad got most of it working and hopes to finish it off in the next couple of weeks. This should give us much better visibility of any errors, allow us to load balance better, and give us an overview of our infrastructure in a way we don't currently have.

Panopta, a monitoring service who kindly donate an account to us, sent along one of their engineers, Shabbir, who talked us through some of the features that would be useful to us.


The biggest visible change so far is the autocomplete on the site, which is now SO much better thanks to the work of Joel and Mickey. This was something Mickey and I started a year or so ago, but they've taken it much further and included user favourites to help boost what you are most likely to want in an autocomplete.

At Graham's request I've converted all Plack apps to run under Gazelle, which is faster.
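For the curious, switching a PSGI app to the Gazelle server is a one-flag change with plackup (app.psgi here is a placeholder for the actual application file):

```shell
# Run the app under Gazelle, a preforking PSGI server, instead of the default.
plackup -s Gazelle --port 5000 app.psgi
```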

I also spent some time cleaning up our infrastructure, moving some sites off an old box (still running our old code from 2+ years ago as a backup), which has since been brought up to the same versions of everything as the other boxes.

LiquidWeb, who host most of our production hardware, came to our rescue not once, but twice. With some reindexing, it looks like we heated up the servers to the point that, over the 4 days, 2 of them rebooted themselves at various points! LiquidWeb responded quickly each time, replacing the power supply in both and a fan in one of them.

Some of the other smaller bits I worked on included automating the purging of our Elasticsearch snapshots (which I set running last year). I also created some S3 buckets for Travis CI artifact storage (our cpanfile.snapshot built from PR runs).


Jetlag has its own sort of fun, so my days have been starting at 4:30, with 3 hours of hacking from the hotel, before heading out for breakfast and then to the ServerCentral offices for a productive morning. Usually by 3pm though my brain just freezes and my fingers are tired... but I'm just starting to get used to it... just in time to head home tonight!


This wouldn't have been possible without our sponsors: cPanel, ServerCentral, Kritika

Perlsphere: Perl 6 Performance and Reliability Engineering: November 2017

As part of the Perl 6 core development fund, Jonathan Worthington has completed another 200 hour block of hours, and his report of what was completed follows the break.

Many thanks to the TPF sponsors of this and other grants. If you're interested in supporting work like this, please donate:

Grant Completion Report: Perl 6 Performance and Reliability Engineering

At the end of July, I was granted a 200 hour extension of my Perl 6 Performance and Reliability Engineering grant. I used this time primarily to focus on MoarVM's dynamic optimizer, although I did many other fixes and improvements aside from that.

Background on the dynamic optimizer improvements

Modern runtimes, especially for dynamic languages, rely on dynamic optimization techniques. These analyze the behavior of the program at runtime, and use that data to drive optimization. MoarVM's dynamic optimizer is typically referred to as "the specializer", or "spesh" for short, and this nicely captures its core strategy: taking code with lots of potential dynamism, and producing one or more specialized versions of the code with as much of that dynamic behavior stripped out as possible.

The specializer was planned as part of MoarVM from the start, although its implementation came after the initial public release of the VM. Soon after that, the focus switched to the Christmas release of Perl 6, where nailing down semantics was a much bigger focus than optimization. Since then, the specializer was improved in numerous ways, however various limitations in its design started to show up repeatedly as we analyzed Perl 6 program performance. Furthermore, stress testing showed up a range of bugs in the optimizer that had potential to cause incorrect behavior in user code.

Therefore, for the sake of both performance and reliability, it was desirable to invest some time improving the specializer.

Specialization in the background

Modern hardware is parallel, and it is desirable to find ways to take advantage of that. Originally, the specializer would do optimization work on the same thread that user code was running on. This not only paused execution in order to do optimization, but it also meant that multiple threads running the same code (say, in a data parallel program) would all race to do and install the same optimization.

I introduced a background thread for performing specializations. This not only meant that optimization work and JIT compilation would not interrupt execution, but also resolved the issue of multiple threads scrambling to do the same optimization. Since there was now only one thread producing optimizations, some locking logic could also go away. A further upshot of this is that even a single-threaded user program can now get some advantage from multi-core hardware.

One downside of this is that the exact timing of specializations being installed becomes unpredictable, and this can make debugging more difficult. Therefore, I added a deterministic mode, enabled by environment variable, which makes a thread pause while the optimization thread does its work. This, for a single-threaded user program, offers deterministic behavior.

Better data for better decisions

The specializer's decision making about what to optimize, and how to optimize it, will only be as good as the data available to it. The data model prior to my work under this grant was rather limiting. For example, it was not possible to get a high level overview of what was monomorphic code (same types always), polymorphic code (handful of different types) and megamorphic code (loads of different types). There were also too few samples to know if a type that was seen to differ once was really rare or not. When there are only ten or so samples, and a type differs one time, then it could vary up to 10% of the time; that will tend to be too costly to leave deoptimization to handle. By contrast, if there are a hundred samples and it happens one time, then it is much safer to leave that slow path to be handled by the interpreter, for the sake of running the common case potentially a lot faster.

I implemented a lightweight interpreter logging mechanism. It writes compact logs about encountered types, invocation targets, and more into a sequential thread-local buffer. When the buffer is filled, it is sent to the specialization thread. Over on that thread, the recorded events are played back in a stack simulation, and a data model built up that aggregates the statistics. This is then used by a planner to decide what optimizations to produce.

Along the way, I introduced a new kind of specialization, which specializes only on the shape of the callsite (how many arguments and which named arguments) rather than the incoming types. This means that megamorphic code (that is, code called on many different types) can now receive some optimization, as well as compilation into machine code. Before, a few specializations were produced, and then everything else was left to run slowly under the interpreter.

New optimizations

Besides allowing for better decision making, I introduced some new optimizations as well as making existing ones more powerful.

  • I enabled many more calls to be inlined (a powerful optimization where a call to a small routine is replaced with the code of the routine itself). This was achieved by using the new statistics to spot when the target of a call was reliably the same, and introducing a guard clause. Prior to this, inlining was only available to methods resolved through the cache or subs in the setting or the outermost scope. I also handled the case where the passed arguments were consistently typed, but it had not been possible for the optimizer to prove that, again using guard clauses.
  • I implemented inlining of closures (that is, code that refers to variables in an outer scope).
  • I made dead code removal happen far more eagerly, and improved the quality of type information available in code following the eliminated conditional. This is a significant improvement for parameters with default values, as well as branches based on types or constants.
  • I made frames that are reliably promoted from the call stack onto the heap be allocated right away on the heap, to save the promotion cost. (This promotion happens when a callframe needs to be referenced by a heap object.)
  • I changed the way that control exception flow is represented to be more accurate, enabling elimination of handlers that become unreachable once the code they cover also becomes unreachable. The change also resulted in more accurate type information propagation, which can aid other optimizations.
  • I made the optimization that rewrites control exceptions into far cheaper goto instructions apply into inlines.

Specializer fixes

The specializer usually only operates on "hot" code, so that the time it spends optimizing will have maximum benefit. However, it is possible to set an environment variable that lowers these thresholds, making the specializer try to optimize every bit of code that is encountered. This, combined with the deterministic mode, provides a means to stress test the optimizer, by forcing it to consider and optimize a great deal more code than it usually would. Running the NQP and Rakudo builds, together with the Perl 6 test suite, in this way can flush out bugs that would not show up when only optimizing hot code.
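As a sketch, assuming the relevant MoarVM environment variables are MVM_SPESH_NODELAY (lower the hot-code thresholds so everything gets optimized) and MVM_SPESH_BLOCKING (the deterministic mode described earlier), such a stress-test run might look like:

```shell
# Optimize all encountered code, deterministically, while running the tests.
MVM_SPESH_NODELAY=1 MVM_SPESH_BLOCKING=1 make test
```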

Prior to my work under this grant, failures would show up from this stress testing as early as the NQP build. After a good amount of bug hunting and fixing, the NQP build and tests, together with the Rakudo build and basic tests, are completely clean under this stress test. The handful of remaining failures are a result of inlining distorting backtraces (at the moment, the inlined frames do not appear in the backtrace), thus causing some error-reporting tests to fail.

The fixes included addressing a long-standing but rarely occurring crash involving the intersection of specialization, multiple dispatch, and destructuring in signatures; a number of different crashes boiled down to this. Another important range of fixes involved eliminating poor assumptions around Scalars and aliasing, which aside from fixing bugs also stands us in a better position to implement escape analysis (which requires alias analysis) in the future.

Notable results from the specializer work

The specialization improvements showed up in a number of benchmarks. Two that are particularly worth calling out are:

  • The daily Text::CSV module benchmark runs hit new lows thanks to the specializer improvements.
  • The "read a million line UTF-8 file" benchmark that I've discussed before, where Rakudo on MoarVM used to be just a bit slower than Perl 5, is now won by Rakudo. This is a result of better code quality after specialization.

Improved GC thread sync-up

I re-worked the way that garbage collection synchronizes running threads, to eliminate busy-waiting. The idea of the original design was that running threads would quickly react to the GC interrupt flag being set on them. However, this presumed that the threads were all really running, which is never a certainty given CPU cores are competed over by many processes. Furthermore, under tools like valgrind and callgrind, which serialize all threads onto a single CPU core, the busy-waiting strategy produced hugely distorted results, and greatly increased the time these useful, but already slow, tools would take. Now the synchronization is done using condition variables, meaning that both the kernel and tools like valgrind/callgrind have a much better idea of what is happening. While callgrind showed a large (10%-15%) reduction in CPU cycles in some multi-threaded programs, the improvements under normal running conditions were, as expected, much smaller, though still worthwhile.

Other work

Along with the improvements described above, I also:

  • Added support to Proc::Async to plumb the output handle of one process into the standard input of another process, with the handles being chained together at the file descriptor level.
  • Hunted down and fixed a SEGV when many processes were spawned and all gave an error.
  • Fixed RT #131384 (panic due to bug in error reporting path of ASCII decoder)
  • Fixed RT #131611 (sigilless variable in coercion could generate internal compiler error)
  • Fixed RT #131740 (wrong $*THREAD after await due to lack of invalidation of the dynamic variable lookup cache)
  • Fixed RT #131365 and RT #131383 (getc and readchars read too many chars at the end of the file)
  • Fixed RT #131673 (is rw with anonymous parameter reported error incorrectly)
  • Fixed MoarVM issue 611 (memory errors arising from certain usage patterns of the decode stream)
  • Fixed MoarVM issue 562 (SEGV from a particular use of the calframe(...) function)
  • Fixed native callbacks when the callback is made on a thread other than the one that passed the callback in the first place
  • Avoided a linear lookup, knocking 5% off the code-gen phase of compiling CORE.setting
  • Removed the long-unused Lexotic feature from MoarVM, which allowed some code cleanup (this used to be used to handle return, but it now uses the control exception system)


I wrote a 4-part series on my blog about the MoarVM specializer. The posts walk through the MoarVM specializer's design and functionality, and mention the many improvements done as a result of this grant, explaining why the new way of doing things represents an improvement over the previous way.

I also travelled to the Swiss Perl Workshop and delivered a talk about the MoarVM specializer, titled "How does deoptimization help us go faster?". The slides and video were published online.


This latest grant extension enabled me to spend a significant amount of time on the MoarVM dynamic optimizer, both addressing bugs as well as overhauling the way information about program execution is collected and used. The new data allows for better decision making, and its availability allowed me to implement some new optimizations. Furthermore, I moved optimization work to take place on a background thread, so as not to interrupt program execution. Aside from this work, I fixed other bugs and made some performance improvements unrelated to the dynamic optimizer. Finally, I gave a presentation about dynamic optimization at the Swiss Perl Workshop, and wrote an extensive 4-part blog series explaining the MoarVM optimization process.

s mazuk: nergaltheopossum:Excuse me, I’m a bit bow-tied up with a...


Excuse me, I’m a bit bow-tied up with a photoshoot. I love the camera so much, most of the time you’ll catch me staring directly at it.

Comic for 2017.11.19

New Cyanide and Happiness Comic

Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: There won’t be blood

DOUG By Guest Blogger Doug Rowat

Decades ago, when I had a few less grey hairs and counted as a good day any day that I didn’t get yelled at by an institutional trader or have to pick up his or her dry cleaning, I did some research work with my old firm’s oil & gas analysts. I remember being amazed at the time that the world actually consumed 75 million barrels of oil per day.

Today that figure seems small, as global demand now flirts with the 100-million-barrel-a-day level and has continued to rise virtually uninterrupted over the past 20 years. Long-term global oil demand growth is, of course, not spectacular, averaging only about 1.5% per year, but the growth rate has been incredibly consistent regardless of the time period (1.3% and 1.7% annually over the past 10 years and five years, respectively, for example).

While the corresponding WTI oil price has been wildly volatile over the past 20 years, suggesting the need for active management in the energy space, the good years for oil can be extraordinarily profitable and the price has still averaged a reasonable 5% annual growth rate over the past two decades. Therefore it makes sense to build a long-term portfolio with at least some oil & gas exposure and we should, from an investment perspective, rid ourselves of the notion that renewable energy and electric cars will somehow loosen our grip on fossil fuel consumption any time soon.

Global Oil Consumption: An Uninterrupted Rise

Source: International Energy Agency; Bloomberg. 20-year quarterly chart

Shorter term, there are a number of positives for the oil & gas sector: a generally strong global economy, declining US inventory levels (down 14% since March), global supply and demand finally being in balance and a significant technical breakout for the WTI oil price (see chart below).

After Consolidating for 18 months, WTI Oil has made a Major Technical Breakout above Resistance


OPEC Secretary-General Mohammad Barkindo also recently stated that production cuts were the “only viable option” to restore stability to the oil market—another positive development. It’s hackneyed oil-industry wisdom, but the only cure for low oil prices is, of course, low oil prices. Typically, when oil prices are subdued there are three outcomes: 1) a reduction in spending on new projects, 2) an outright postponement of new projects or 3) production cuts—all of which lead to lower supply growth. New extraction techniques developed in the past decade, such as fracking, which led to the explosive growth in US production, signaled a major change for the oil industry and speculation became rampant that the oversupply would never work its way out of the market.

But it did (see chart below). The oil market has historically been pragmatic—albeit not always immediately so—and supply-demand has once again moved into balance. In fact, the correlation between global supply and global demand has been 97% over the past 10 years. That’s an efficient market.

Global Oil Supply & Demand: Once Again in Balance

Source: Bloomberg. White line = supply, orange line = demand. Shaded green = oversupply.

So the shorter-term fundamentals for oil remain, in our view, positive, which is one reason why we increased our clients’ exposure to Canadian equities earlier this year. But regardless of how near-term fundamentals develop, the longer-term pattern is clear: global oil consumption will rise and, despite its shorter-term inefficiencies, the oil industry eventually finds a way to create balance between supply and demand. And while it’s an industry that’s no friend to the environment, your portfolio should have oil & gas exposure. My car? Still takes gas. My house? No solar panels yet. My possessions? All made directly or indirectly with fossil fuels.

Invest in oil & gas. And if you feel guilty, buy a Prius plug-in.

Doug Rowat, FCSI® is Portfolio Manager with Turner Investments and Senior Vice President, Private Client Group, Raymond James Ltd.

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Toddlers

Click here to go see the bonus panel!

Also, it's easy to mistake puberty for turning into a werewolf.

New comic!
Today's News:

new shelton wet/dry: Every day, the same, again

Detroit police officers fight each other in undercover op gone wrong
The newest museum in Washington, D.C., is a $500 million institution dedicated to a single book.
How Facebook Figures Out Everyone You’ve Ever Met (A man who years ago donated sperm to a couple, secretly, so they could have a child—only to have Facebook [...]

Comic for 2017.11.18

New Cyanide and Happiness Comic

MattCha's Blog: 2017 Essence of Tea Yiwu GuoYouLin and Trusting Gushu Claims

These days are quite different from when I first started drinking puerh.  Back then nobody really understood what “ancient”, “old tree”, or “gushu” really meant, or how to distinguish such material from even some factory-produced puerh.  Yeah, it was that bad back then.  Nowadays, a lot of people have come to understand that most of these claims can be immediately disregarded.  I feel that Essence of Tea is an exception to that rule.  To me, the care and strict oversight that David and Yingxi use to source tea = truth.  If David says that his new Essence of Tea puerh is (fill in blank), then I have no doubt that it is (fill in blank).  I have the highest level of trust in David’s sourcing of fresh puerh nowadays.

This kind of guarantee of authenticity, however, comes with a price.  This is only fair, because guaranteeing such claims must take an enormous amount of time, energy, and effort on the part of David and Yingxi.  If I were both in a financial place where I could spend the money and serious about acquiring young, fresh, pure ancient puerh from a famous producing area, I would be pre-ordering Essence of Tea’s 2018 puerh.  No questions asked.  But because I am not in such a place right now, I am simply humbled by the chance to sample such things and enjoy the experience as it unfolds.  Please come along with me, sit down, and enjoy…

This tea came from a small selection of puerh trees protected in the Yiwu State Forest that were discovered in 2004.  Essence of Tea has given this tea the bold claim of “the best tea we’ve ever pressed”.  This tea sold out fast but used to go for $450.00 for a 400g cake.  It is the last of the free samples included in my last purchase from Essence of Tea (thanks again David).

Dry leaves carry a deep and rich odour of multifaceted fruits.  Concord grape odours in a slightly deep forest smell.  Many rich high notes are released into the olfactory.

First infusion has a slightly syrupy sweet fruit taste.  There is a nice crispness to the taste but also a depth to it.  The aftertaste is of high fruit tastes which don’t last too long before a sandy taste reveals itself.  Overall, this first infusion surprised me because I was expecting lots of top notes but not this much grounding depth.  Right off the bat this tea has a full feeling to it.  The throatfeeling is subtle but deep.

The second infusion shows off more of this profile, with an initial taste that is not too sweet nor that fruity, but has a nice slight fruit taste within a deeper profile of slight grains, forest tastes, and bread tastes, which comprise just as much of the taste profile, especially in the middle after the initial syrupy sweetness has almost disappeared.  The fruity sweetness re-appears stronger in the breath and in the aftertaste compared to the first infusion.  The mouthfeel has a nice, mild, almost sandy texture.

The third infusion has even less initial sweet taste but more of a mellow apricot-bread taste to it.  The aftertaste is a thick, blanketing taste of dried fruits and breads.  This tea has a very deep, heavy profile for such a young tea- very nice sustenance.  The mouthfeel develops a sticky, syrupy feeling to match the syrup taste.  The throat sensation is opening, viscous, and thick.

The fourth infusion starts with slight fruits that are overpowered by a deeper foresty taste.  There is a nice spike of sugary returning sweetness before being dragged out into an apricot bread taste of fresh yeasty baking.  The bread tastes are the dominant ones; the mild initial sweetness is fading.  The aftertaste is of dry fruit.

The fifth is completely different: it has a tingling, sugary, slightly cooling initial taste, with a cool menthol-like sweetness more prominent here; the fruity sweetness is now almost unnoticeable as this cooling, pungent, sugary sweetness emerges.  There is another depth to the taste which tastes of syrup.  This infusion also starts to show signs of a woody taste emerging.  The initial taste and aftertaste are less exciting than the long, thick middle taste.  The qi of this tea is actually very mild and is mainly felt as a light sensation in the head and a very clear mind.  It feels very unimposing in the body and is very mild and approachable for such a young puerh.

The sixth infusion is much the same with a deep evolving taste over a long profile.

The seventh infusion has a vegetal initial taste; there is a complete void of sweetness now.  That taste is dominant and slowly gives way to some bread-like taste and a very faint returning coolness, followed by bready and dried-apricot fruit tastes in the aftertaste.  The aftertaste is now where the most action is.  The rest of the infusion is dominated by a slightly bitter and more standard vegetal taste.  The mouthfeel and throatfeeling are mild but full.  The throatfeeling is deep and mild, achieving a nice, complete, but mildly stimulating sensation.

The eighth has a much smoother and more harmonious taste, more blended together now- vegetal, bread, slight wood, barely dried apricot.  The mouthfeeling and throatfeeling become slightly drying here and are getting noticeably stronger.

The ninth infusion starts off with dry and slightly puckery fruit notes over vegetal notes, then transforms to a woody taste, then to a nice bread-like returning sweetness with slight cooling.  The cooling sensation brings wood and faint sugar notes.  Overall the profile is turning more wood-like, and the mouthfeel and throatfeel start moving out of the medium-mild stimulation and into a stronger, slightly gripping stimulation at this point.

The tenth and eleventh have a nice robust bread, wood, and melon taste which is nicely blended throughout.  The taste is nicely harmonious here, and the mouth- and throatfeeling feel quite satisfyingly complete.  The vegetal/bitter note is gone, leaving nice tastes to thoroughly enjoy.

The twelfth has a grainy-cereal taste as the dominant note.  There are woody, syrupy edges to it, but it is grain-tasting throughout.  The mouthfeel is mild to medium-full; the throatfeel is deep and light.

The thirteenth and fourteenth have much the same tastes.  A fruity, mild returning sweetness emerges in these later infusions.  Still very much enjoyable here.

The fifteenth and sixteenth become even more mild but are still enjoyable and harmonious.  I am still doing only flash infusions here, with no need to add any extra time to enjoy this tea, which I think could potentially just make it bitter and dry.  This tea has great stamina.  Overall, this tea has a very mild qi and very little bodyfeel.

The seventeenth and eighteenth become more mild but are still not really bitter and are enjoyed.

If acquiring true, pure gushu is the top criterion in your puerh buying, then look no further than Essence of Tea.  However, based on taste/smell, qi, and bodyfeel/mouthfeel alone, this tea would not be worth it for the average person.  Once provenance is added to the equation, and if you are one who values such things, this puerh is immediately worth it, especially in a climate where such things are becoming harder to actually and undeniably verify.


Steepster’s Tasting Notes and Commentary

Planet Haskell: Sandy Maguire: Type-Directed Code Generation


Type-Directed Code Generation


November 18, 2017

aka “Type-Level Icing Sugar”


At work recently I’ve been working on a library to get idiomatic gRPC support in our Haskell project. I’m quite proud of how it’s come out, and thought it’d make a good topic for a blog post. The approach demonstrates several type-level techniques that in my opinion are under-documented and exceptionally useful in using the type-system to enforce external contracts.

Thankfully the networking side of the library had already been done for me by Awake Security, but the interface feels like a thin wrapper on top of C bindings. I’m very, very grateful that it exists, but I wouldn’t expect myself to be able to use it in anger without causing an uncaught type error somewhere along the line. I’m sure I’m probably just using it wrong, but the library’s higher-level bindings all seemed to be targeted at Awake’s implementation of protobuffers.

We wanted a version that would play nicely with proto-lens, which, at time of writing, has no official support for describing RPC services via protobuffers. If you’re not familiar with proto-lens, it generates Haskell modules containing idiomatic types and lenses for protobuffers, and can be used directly in the build chain.

So the task was to add support to proto-lens for generating interfaces to RPC services defined in protobuffers.

My first approach was to generate the dumbest possible thing that could work – the idea was to generate records containing fields of the shape Request -> IO Response. Of course, with a network involved there is a non-negligible chance of things going wrong, so this interface should expose some means of dealing with errors. However, the protobuffer spec is agnostic about the actual RPC backend used, and so it wasn’t clear how to continue without assuming anything about the particulars behind errors.

More worrisome, however, was that RPCs can be marked as streaming – on the side of the client, server, or both. This means, for example, that a method marked as server-streaming has a different interface on either side of the network:

serverSide :: Request -> (Response -> IO ()) -> IO ()
clientSide :: Request -> (IO (Maybe Response) -> IO r) -> IO r

This is problematic. Should we generate different records corresponding to which side of the network we’re dealing with? An early approach I had was to parameterize the same record based on which side of the network, and use a type family to get the correct signature:

{-# LANGUAGE DataKinds #-}

data NetworkSide = Client | Server

data MyService side = MyService
  { runServerStreaming :: ServerStreamingType side Request Response
  }

type family ServerStreamingType (side :: NetworkSide) input output where
  ServerStreamingType Server input output =
      input -> (output -> IO ()) -> IO ()

  ServerStreamingType Client input output =
      forall r. input -> (IO (Maybe output) -> IO r) -> IO r

This seems like it would work, but in fact the existence of the forall on the client-side is “illegally polymorphic” in GHC’s eyes, and it will refuse to compile such a thing. Giving it up would mean we wouldn’t be able to return arbitrarily-computed values on the client-side while streaming data from the server. Users of the library might be able to get around it by invoking IORefs or something, but it would be ugly and non-idiomatic.

So that, along with wanting to be backend-agnostic, made this approach a no-go. Luckily, my brilliant coworker Judah Jacobson (who is coincidentally also the author of proto-lens), suggested we instead generate metadata for RPC services in proto-lens, and let backend library code figure it out from there.

With all of that context out of the way, we’re ready to get into the actual meat of the post. Finally.

Generating Metadata

According to the spec, a protobuffer service may contain zero or more RPC methods. Each method has a request and response type, either of which might be marked as streaming.

While we could represent this metadata at the term-level, that won’t do us any favors in terms of getting type-safe bindings to this stuff. And so, we instead turn to TypeFamilies, DataKinds and GHC.TypeLits.

For reasons that will become clear later, we chose to represent RPC services via types, and methods in those services as symbols (type-level strings). The relevant typeclasses look like this:

class Service s where
  type ServiceName    s :: Symbol

class HasMethod s (m :: Symbol) where
  type MethodInput       s m :: *
  type MethodOutput      s m :: *
  type IsClientStreaming s m :: Bool
  type IsServerStreaming s m :: Bool

For example, the instances generated for the RPC service:

service MyService {
  rpc BiDiStreaming(stream Request) returns(stream Response);
}

would look like this:

data MyService = MyService

instance Service MyService where
  type ServiceName    MyService = "myService"

instance HasMethod MyService "biDiStreaming" where
  type MethodInput       MyService "biDiStreaming" = Request
  type MethodOutput      MyService "biDiStreaming" = Response
  type IsClientStreaming MyService "biDiStreaming" = 'True
  type IsServerStreaming MyService "biDiStreaming" = 'True

You’ll notice that these typeclasses perfectly encode all of the information we had in the protobuffer definition. The idea is that with all of this metadata available to them, specific backends can generate type-safe interfaces to these RPCs. We’ll walk through the implementation of the gRPC bindings together.
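As a taste of how a backend can consume this metadata, here's a minimal, self-contained sketch (toy versions of the generated classes, not actual proto-lens output) showing that the type-level service name can be reflected back down to a runtime String:

```haskell
{-# LANGUAGE DataKinds           #-}
{-# LANGUAGE FlexibleContexts    #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeFamilies        #-}
import Data.Proxy   (Proxy (..))
import GHC.TypeLits (KnownSymbol, Symbol, symbolVal)

class Service s where
  type ServiceName s :: Symbol

data MyService = MyService

instance Service MyService where
  type ServiceName MyService = "myService"

-- Reflect the type-level service name down to a runtime String,
-- which is the sort of thing a backend needs for routing requests.
serviceName :: forall s. (Service s, KnownSymbol (ServiceName s)) => s -> String
serviceName _ = symbolVal (Proxy :: Proxy (ServiceName s))

main :: IO ()
main = putStrLn (serviceName MyService)  -- prints "myService"
```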

The Client Side

The client side of things is relatively easy. We can use the HasMethod instance directly:

runNonStreamingClient
    :: HasMethod s m
    => s
    -> Proxy m
    -> MethodInput s m
    -> IO (Either GRPCError (MethodOutput s m))
runNonStreamingClient =  -- call the underlying gRPC code

runServerStreamingClient
    :: HasMethod s m
    => s
    -> Proxy m
    -> MethodInput s m
    -> (IO (Either GRPCError (Maybe (MethodOutput s m))) -> IO r)
    -> IO r
runServerStreamingClient =  -- call the underlying gRPC code

-- etc

This is a great start! We’ve got the interface we wanted for the server-streaming code, and our functions are smart enough to require the correct request and response types.

However, there’s already some type-unsafety here; namely that nothing stops us from calling runNonStreamingClient on a streaming method, or other such silly things.

Thankfully the fix is quite easy – we can use type-level equality to force callers to be attentive to the streaming-ness of the method:

runNonStreamingClient
    :: ( HasMethod s m
       , IsClientStreaming s m ~ 'False
       , IsServerStreaming s m ~ 'False
       )
    => s
    -> Proxy m
    -> MethodInput s m
    -> IO (Either GRPCError (MethodOutput s m))

runServerStreamingClient
    :: ( HasMethod s m
       , IsClientStreaming s m ~ 'False
       , IsServerStreaming s m ~ 'True
       )
    => s
    -> Proxy m
    -> MethodInput s m
    -> (IO (Either GRPCError (Maybe (MethodOutput s m))) -> IO r)
    -> IO r

-- et al.

Would-be callers attempting to use the wrong function for their method will now be warded off by the type-system, due to the equality constraints being unable to be discharged. Success!
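The mechanism at work can be seen in a tiny standalone form. In this sketch (toy names only, not the real library), the equality constraint only discharges when a type family reduces to 'False, so calling the function on a "streaming" method simply fails to typecheck:

```haskell
{-# LANGUAGE DataKinds        #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE TypeFamilies     #-}
import Data.Proxy   (Proxy (..))
import GHC.TypeLits (Symbol)

-- Toy metadata: one unary and one streaming method.
type family IsStreaming (m :: Symbol) :: Bool where
  IsStreaming "getThing"    = 'False
  IsStreaming "watchThings" = 'True

-- The equality constraint is only satisfiable when the family
-- reduces to 'False, warding off calls on streaming methods.
callUnary :: (IsStreaming m ~ 'False) => Proxy m -> String
callUnary _ = "ok: unary call"

main :: IO ()
main = putStrLn (callUnary (Proxy :: Proxy "getThing"))
-- callUnary (Proxy :: Proxy "watchThings")  -- would be a type error
```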

The actual usability of this code leaves much to be desired (it requires being passed a proxy, and the type errors are absolutely disgusting), but we’ll circle back on improving it later. As it stands, this code is type-safe, and that’s good enough for us for the time being.

The Server Side

Method Discovery

Prepare yourself (but don’t panic!): the server side of things is significantly more involved.

In order to run a server, we’re going to need to be able to handle any sort of request that can be thrown at us. That means we’ll need an arbitrary number of handlers, depending on the service in question. An obvious thought would be to generate a record we could consume that would contain handlers for every method, but there’s no obvious place to generate such a thing. Recall: proto-lens can’t, since such a type would be backend-specific, and so our only other strategy down this path would be Template Haskell. Yuck.

Instead, recall that we have an instance of HasMethod for every method on Service s – maybe we could exploit that information somehow? Unfortunately, without Template Haskell, there’s no way to discover typeclass instances.

But that doesn’t mean we’re stumped. Remember that we control the code generation, and so if the representation we have isn’t powerful enough, we can change it. And indeed, the representation we have isn’t quite enough. We can go from a HasMethod s m to its Service s, but not the other way. So let’s change that.

We change the Service class slightly:

class Service s where
  type ServiceName    s :: Symbol
  type ServiceMethods s :: [Symbol]

If we ensure that the ServiceMethods s type family always contains an element for every instance of HasMethod, we’ll be able to use that info to discover our instances. For example, our previous MyService will now get generated thusly:

data MyService = MyService

instance Service MyService where
  type ServiceName    MyService = "myService"
  type ServiceMethods MyService = '["biDiStreaming"]

instance HasMethod MyService "biDiStreaming" where
  type MethodInput       MyService "biDiStreaming" = Request
  type MethodOutput      MyService "biDiStreaming" = Response
  type IsClientStreaming MyService "biDiStreaming" = 'True
  type IsServerStreaming MyService "biDiStreaming" = 'True

and we would likewise add the m for any other HasMethod MyService m instances if they existed.

This seems like we can now use ServiceMethods s to get a list of methods, and then somehow type-level map over them to get the HasMethod s m constraints we want.

And we almost can, except that we haven’t told the type-system that ServiceMethods s relates to HasMethod s m instances in this way. We can add a superclass constraint to Service to do this:

class HasAllMethods s (ServiceMethods s) => Service s where
  -- as before

But what is this HasAllMethods thing? It’s a specialized type-level map which turns our list of methods into a bunch of constraints proving we have HasMethod s m for every m in that promoted list.

class HasAllMethods s (xs :: [Symbol])

instance HasAllMethods s '[]
instance (HasMethod s x, HasAllMethods s xs) => HasAllMethods s (x ': xs)

We can think of xs here as the list of constraints we want. Obviously if we don’t want any constraints (the '[] case), we trivially have all of them. The other case is induction: if we have a non-empty list of constraints we’re looking for, that’s the same as looking for the tail of the list, and having the constraint for the head of it.

Read through these instances a few times; make sure you understand the approach before continuing, because we’re going to keep using this technique in scarier and scarier ways.

With this HasAllMethods superclass constraint, we can now convince ourselves (and, more importantly, GHC), that we can go from a Service s constraint to all of its HasMethod s m constraints. Cool!
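To convince yourself the induction really discharges, here's a compilable toy version (illustrative names, not the generated code): the checked function demands HasAllMethods, so it compiles exactly when every symbol in the list has a HasMethod instance.

```haskell
{-# LANGUAGE DataKinds             #-}
{-# LANGUAGE FlexibleInstances     #-}
{-# LANGUAGE KindSignatures        #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeOperators         #-}
import Data.Proxy   (Proxy (..))
import GHC.TypeLits (Symbol)

data MyService = MyService

class HasMethod s (m :: Symbol)
instance HasMethod MyService "biDiStreaming"

-- Base case: an empty list of methods needs no evidence.
-- Induction: a cons needs HasMethod for the head, recursing on the tail.
class HasAllMethods s (xs :: [Symbol])
instance HasAllMethods s '[]
instance (HasMethod s x, HasAllMethods s xs) => HasAllMethods s (x ': xs)

-- Compiles only because HasMethod MyService "biDiStreaming" exists;
-- adding a bogus method name to the list would be a compile error.
checked :: HasAllMethods s xs => Proxy s -> Proxy xs -> String
checked _ _ = "all methods present"

main :: IO ()
main = putStrLn (checked (Proxy :: Proxy MyService)
                         (Proxy :: Proxy '["biDiStreaming"]))
```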

Typing the Server

We return to thinking about how to actually run a server. As we’ve discussed, such a function will need to be able to handle every possible method, and, unfortunately, we can’t pack them into a convenient data structure.

Our actual implementation of such a thing might take a list of handlers. But recall that each handler has different input and output types, as well as different shapes depending on which bits of it are streaming. We can make this approach work by existentializing away all of the details.

While it works as far as the actual implementation of the underlying gRPC goes, we’re left with a great sense of uneasiness. We have no guarantees that we’ve provided a handler for every method, and the very nature of existentialization means we have absolutely no guarantees that any of these things are the right type.

Our only recourse is to somehow use our Service s constraint to put a prettier facade in front of this ugly-if-necessary implementation detail.

The actual interface we’ll eventually provide will, for example, for a service with two methods, look like this:

runServer :: HandlerForMethod1 -> HandlerForMethod2 -> IO ()

Of course, we can’t know a priori how many methods there will be (or what type their handlers should have, for that matter). We’ll somehow need to extract this information from Service s – which is why we previously spent so much effort on making the methods discoverable.

The technique we’ll use is the same one you’ll find yourself using again and again when you’re programming at the type-level. We’ll make a typeclass with an associated type family, and then provide a base case and an induction case.

class HasServer s (xs :: [Symbol]) where
  type ServerType s xs :: *

We need to make the methods xs explicit as parameters in the typeclass, so that we can reduce them. The base case is simple – a server with no more handlers is just an IO action:

instance HasServer s '[] where
  type ServerType s '[] = IO ()

The induction case, however, is much more interesting:

instance ( HasMethod s x
         , HasMethodHandler s x
         , HasServer s xs
         ) => HasServer s (x ': xs) where
  type ServerType s (x ': xs) = MethodHandler s x -> ServerType s xs

The idea is that as we pull methods x off our list of methods to handle, we build a function type that takes a value of the correct type to handle method x, which will take another method off the list until we’re out of methods to handle. This is exactly a type-level fold over a list.

The only remaining question is “what is this MethodHandler thing?” It’s going to have to be a type family that will give us back the correct type for the handler under consideration. Such a type will need to dispatch on the streaming variety as well as the request and response, so we’ll define it as follows, and go back and fix HasServer later.

class HasMethodHandler input output cs ss where
  type MethodHandler input output cs ss :: *

cs and ss refer to whether we’re looking for client-streaming and/or server-streaming types, respectively.

Such a thing could be a type family, but isn’t because we’ll need its class-ness later in order to actually provide an implementation of all of this stuff. We provide the following instances:

-- non-streaming
instance HasMethodHandler input output 'False 'False where
  type MethodHandler input output 'False 'False =
    input -> IO output

-- server-streaming
instance HasMethodHandler input output 'False 'True where
  type MethodHandler input output 'False 'True =
    input -> (output -> IO ()) -> IO ()

-- etc for client and bidi streaming
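Here's a self-contained sketch of the dispatch (toy handler types, not the full gRPC shapes) showing that the two Bool indices really do select different handler signatures:

```haskell
{-# LANGUAGE DataKinds             #-}
{-# LANGUAGE FlexibleInstances     #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies          #-}

-- cs and ss index the class by client- and server-streaming-ness.
class HasMethodHandler input output (cs :: Bool) (ss :: Bool) where
  type MethodHandler input output cs ss :: *

-- non-streaming: a plain request/response function
instance HasMethodHandler input output 'False 'False where
  type MethodHandler input output 'False 'False = input -> IO output

-- server-streaming: the handler pushes outputs through a callback
instance HasMethodHandler input output 'False 'True where
  type MethodHandler input output 'False 'True =
    input -> (output -> IO ()) -> IO ()

-- MethodHandler String Int 'False 'False reduces to String -> IO Int.
lengthHandler :: MethodHandler String Int 'False 'False
lengthHandler = pure . length

-- MethodHandler String Char 'False 'True reduces to
-- String -> (Char -> IO ()) -> IO ().
streamChars :: MethodHandler String Char 'False 'True
streamChars s send = mapM_ send s

main :: IO ()
main = do
  n <- lengthHandler "hello"
  print n  -- prints 5
  streamChars "hi" (\c -> putStrLn ['<', c, '>'])
```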

With MethodHandler now powerful enough to give us the types we want for handlers, we can go back and fix HasServer so it will compile again:

instance ( HasMethod s x
         , HasMethodHandler (MethodInput       s x)
                            (MethodOutput      s x)
                            (IsClientStreaming s x)
                            (IsServerStreaming s x)
         , HasServer s xs
         ) => HasServer s (x ': xs) where
  type ServerType s (x ': xs)
      = MethodHandler (MethodInput       s x)
                      (MethodOutput      s x)
                      (IsClientStreaming s x)
                      (IsServerStreaming s x)
     -> ServerType s xs

It’s not pretty, but it works! We can convince ourselves of this by asking ghci:

ghci> :kind! ServerType MyService (ServiceMethods MyService)

(Request -> (Response -> IO ()) -> IO ()) -> IO () :: *

and, if we had other methods defined for MyService, they’d show up here with the correct handler type, in the order they were listed in ServiceMethods MyService.

Implementing the Server

Our ServerType family now expands to a function type which takes a handler value (of the correct type) for every method on our service. That turns out to be more than half the battle – all we need to do now is to provide a value of this type.

The generation of such a value is going to need to proceed in perfect lockstep with the generation of its type, so we add to the definition of HasServer:

class HasServer s (xs :: [Symbol]) where
  type ServerType s xs :: *
  runServerImpl :: [AnyHandler] -> ServerType s xs

What is this [AnyHandler] thing, you might ask. It’s an explicit accumulator for existentialized handlers we’ve collected during the fold over xs. It’ll make sense when we look at the induction case.

For now, however, the base case is trivial as always:

instance HasServer s '[] where
  type ServerType s '[] = IO ()
  runServerImpl handlers = runGRPCServer handlers

where runGRPCServer is the underlying server provided by Awake’s library.

We move to the induction case:

instance ( HasMethod s x
         , HasMethodHandler (MethodInput       s x)
                            (MethodOutput      s x)
                            (IsClientStreaming s x)
                            (IsServerStreaming s x)
         , HasServer s xs
         ) => HasServer s (x ': xs) where
  type ServerType s (x ': xs)
      = MethodHandler (MethodInput       s x)
                      (MethodOutput      s x)
                      (IsClientStreaming s x)
                      (IsServerStreaming s x)
     -> ServerType s xs
  runServerImpl handlers f = runServerImpl (existentialize f : handlers)

where existentialize is a new class method we add to HasMethodHandler. We will elide it here, because it is just a function MethodHandler i o cs ss -> AnyHandler and is not particularly interesting if you’re familiar with existentialization.

It’s evident here what I meant by handlers being an explicit accumulator – our recursion adds the parameters it receives into this list so that it can pass them eventually to the base case.

There’s a problem here, however. Reading through this implementation of runServerImpl, you and I both know what the right-hand side means; unfortunately, GHC isn’t as clever as we are. If you try to compile it right now, GHC will complain about the non-injectivity of HasServer as implied by the call to runServerImpl (and also about HasMethodHandler and existentialize, but for the exact same reason).

The problem is that there’s nothing constraining the type variables s and xs on runServerImpl. I always find this error confusing (and I suspect everyone does), because in my mind it’s perfectly clear from the HasServer s xs in the instance constraint. However, because ServerType is a type family without any injectivity declarations, it means we can’t learn s and xs from ServerType s xs.

Let’s see why. For a very simple example, let’s look at the following type family:

type family NotInjective a where
  NotInjective Int  = ()
  NotInjective Bool = ()

Here we have NotInjective Int ~ () and NotInjective Bool ~ (), which means even if we know NotInjective a ~ () it doesn’t mean that we know what a is – it could be either Int or Bool.

This is the exact problem we have with runServerImpl: even though we know what type runServerImpl has (it must be ServerType s xs, so that the type on the left-hand of the equality is the same as on the right), that doesn’t mean we know what s and xs are! The solution is to explicitly tell GHC via a type signature or type application:

instance ( HasMethod s x
         , HasMethodHandler (MethodInput       s x)
                            (MethodOutput      s x)
                            (IsClientStreaming s x)
                            (IsServerStreaming s x)
         , HasServer s xs
         ) => HasServer s (x ': xs) where
  type ServerType s (x ': xs)
      = MethodHandler (MethodInput       s x)
                      (MethodOutput      s x)
                      (IsClientStreaming s x)
                      (IsServerStreaming s x)
     -> ServerType s xs
  runServerImpl handlers f = runServerImpl @s @xs (existentialize f : handlers)

(For those of you playing along at home, you’ll need to type-apply the monstrous MethodInput and friends to the existentialize as well.)
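The whole problem and its fix fit in a small compilable toy, with Strings standing in for handlers and a plain list standing in for [AnyHandler] (all names here are illustrative):

```haskell
{-# LANGUAGE AllowAmbiguousTypes #-}
{-# LANGUAGE DataKinds           #-}
{-# LANGUAGE FlexibleInstances   #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications    #-}
{-# LANGUAGE TypeFamilies        #-}
{-# LANGUAGE TypeOperators       #-}
import GHC.TypeLits (Symbol)

-- A fold over a type-level list: one String parameter per method name,
-- accumulated into a list at the base case.
class BuildServer (xs :: [Symbol]) where
  type ServerTy xs :: *
  buildImpl :: [String] -> ServerTy xs

instance BuildServer '[] where
  type ServerTy '[] = [String]
  buildImpl acc = reverse acc

instance BuildServer xs => BuildServer (x ': xs) where
  type ServerTy (x ': xs) = String -> ServerTy xs
  -- Without the @xs, GHC can't tell which instance's buildImpl we
  -- mean, because ServerTy is not injective.
  buildImpl acc h = buildImpl @xs (h : acc)

main :: IO ()
main = print (buildImpl @'["foo", "bar"] [] "handler1" "handler2")
-- prints ["handler1","handler2"]
```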

And finally, we’re done! We can slap a prettier interface in front of this runServerImpl to fill in some of the implementation details for us:

runServer
    :: forall s
     . ( Service s
       , HasServer s (ServiceMethods s)
       )
    => s
    -> ServerType s (ServiceMethods s)
runServer _ = runServerImpl @s @(ServiceMethods s) []

Sweet and typesafe! Yes!

Client-side Usability

Sweet and typesafe all of this might be, but the user-friendliness on the client-side leaves a lot to be desired. As promised, we’ll address that now.

Removing Proxies

Recall that the runNonStreamingClient function and its friends require a Proxy m parameter in order to specify the method you want to call. However, m has kind Symbol, and thankfully we have some new extensions in GHC for turning Symbols into values.

We can define a new type, isomorphic to Proxy, but which packs the fact that it is a KnownSymbol (something we can turn into a String at runtime):

data WrappedMethod (sym :: Symbol) where
  WrappedMethod :: KnownSymbol sym => WrappedMethod sym

We change our run*Client friends to take this WrappedMethod m instead of the Proxy m they used to:

runNonStreamingClient
    :: ( HasMethod s m
       , IsClientStreaming s m ~ 'False
       , IsServerStreaming s m ~ 'False
       )
    => s
    -> WrappedMethod m
    -> MethodInput s m
    -> IO (Either GRPCError (MethodOutput s m))

and, with this change in place, we’re ready for the magic syntax I promised earlier.

import GHC.OverloadedLabels

instance ( KnownSymbol sym
         , sym ~ sym'
         ) => IsLabel sym (WrappedMethod sym') where
  fromLabel _ = WrappedMethod

This sym ~ sym' thing is known as the constraint trick for instances, and is necessary here to convince GHC that this can be the only possible instance of IsLabel that will give you back WrappedMethods.

Now turning on the {-# LANGUAGE OverloadedLabels #-} pragma, we’ve changed the syntax to call these client functions from the ugly:

runBiDiStreamingClient MyService (Proxy @"biDiStreaming")

into the much nicer:

runBiDiStreamingClient MyService #biDiStreaming
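Putting the pieces together, the label trick looks like this in a minimal compilable form (note that on GHC 8.2 and later, fromLabel takes no Proxy# argument, so it's written here as a bare value):

```haskell
{-# LANGUAGE DataKinds             #-}
{-# LANGUAGE FlexibleInstances     #-}
{-# LANGUAGE GADTs                 #-}
{-# LANGUAGE KindSignatures        #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE OverloadedLabels      #-}
{-# LANGUAGE ScopedTypeVariables   #-}
import Data.Proxy           (Proxy (..))
import GHC.OverloadedLabels (IsLabel (..))
import GHC.TypeLits         (KnownSymbol, Symbol, symbolVal)

-- Isomorphic to Proxy, but packing KnownSymbol evidence.
data WrappedMethod (sym :: Symbol) where
  WrappedMethod :: KnownSymbol sym => WrappedMethod sym

-- The constraint trick: the instance head matches any sym', and the
-- sym ~ sym' equality then forces the two apart-looking variables
-- together, making this the only candidate instance.
instance (KnownSymbol sym, sym ~ sym') => IsLabel sym (WrappedMethod sym') where
  fromLabel = WrappedMethod

-- Unpack the KnownSymbol evidence to recover the name at runtime.
methodName :: forall sym. WrappedMethod sym -> String
methodName WrappedMethod = symbolVal (Proxy :: Proxy sym)

main :: IO ()
main = putStrLn (methodName #biDiStreaming)  -- prints "biDiStreaming"
```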

Better “Wrong Streaming Variety” Errors

The next step in our journey to delightful usability is remembering that the users of our library are only human, and at some point they are going to call the wrong run*Client function on their method with a different variety of streaming semantics.

At the moment, the errors they’re going to get when they try that will be a few stanzas long, the most informative of which will be something along the lines of unable to match 'False with 'True. Yes, it’s technically correct, but it’s entirely useless.

Instead, we can use the TypeError machinery from GHC.TypeLits to make these error messages actually helpful to our users. If you aren’t familiar with it: whenever GHC encounters a TypeError constraint, it will die with an error message of your choosing.

We will introduce the following type family:

type family RunNonStreamingClient (cs :: Bool) (ss :: Bool) :: Constraint where
  RunNonStreamingClient 'False 'False = ()
  RunNonStreamingClient 'False 'True = TypeError
      ( Text "Called 'runNonStreamingClient' on a server-streaming method."
   :$$: Text "Perhaps you meant 'runServerStreamingClient'."
      )
  RunNonStreamingClient 'True 'False = TypeError
      ( Text "Called 'runNonStreamingClient' on a client-streaming method."
   :$$: Text "Perhaps you meant 'runClientStreamingClient'."
      )
  RunNonStreamingClient 'True 'True = TypeError
      ( Text "Called 'runNonStreamingClient' on a bidi-streaming method."
   :$$: Text "Perhaps you meant 'runBiDiStreamingClient'."
      )

The :$$: type operator stacks messages vertically, while :<>: stacks them horizontally.

We can change the constraints on runNonStreamingClient:

runNonStreamingClient
    :: ( HasMethod s m
       , RunNonStreamingClient (IsClientStreaming s m)
                               (IsServerStreaming s m)
       )
    => s
    -> WrappedMethod m
    -> MethodInput s m
    -> IO (Either GRPCError (MethodOutput s m))

and similarly for our other client functions. Reduction of the resulting boilerplate is left as an exercise to the reader.

With all of this work out of the way, we can test it:

runNonStreamingClient MyService #biDiStreaming
Main.hs:45:13: error:
    • Called 'runNonStreamingClient' on a bidi-streaming method.
      Perhaps you meant 'runBiDiStreamingClient'.
    • In the expression: runNonStreamingClient MyService #bidi


Better “Wrong Method” Errors

The other class of errors we expect our users to make is to attempt to call a method that doesn’t exist – either because they made a typo, or are forgetful of which methods exist on the service in question.

As it stands, users are likely to get about six stanzas of error messages, from No instance for (HasMethod s m) to Ambiguous type variable 'm0', and other terrible things that leak our implementation details. Our first thought might be to somehow emit a TypeError constraint if we don’t have a HasMethod s m instance, but I’m not convinced such a thing is possible.

But luckily, we can actually do better than any error messages we could produce in that way. Since our service is driven by a value (in our example, the data constructor MyService), by the time things go wrong we do have a Service s instance in scope. Which means we can look up our ServiceMethods s and give some helpful suggestions about what the user probably meant.

The first step is to implement a ListContains type family so we can determine if the method we’re looking for is actually a real method.

type family ListContains (n :: k) (hs :: [k]) :: Bool where
  ListContains n '[]       = 'False
  ListContains n (n ': hs) = 'True
  ListContains n (x ': hs) = ListContains n hs

In the base case, we have no list to look through, so our needle is trivially not in the haystack. If the head of the list is the thing we’re looking for, then it must be in the list. Otherwise, take off the head of the list and continue looking. Simple really, right?
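Although ListContains runs during type checking, its recursion is just the ordinary membership test. As a value-level sketch of the same logic (the function below is our own illustration, not part of the library):

```haskell
-- Value-level analogue of the ListContains type family (illustration only;
-- the real lookup happens at compile time, over type-level lists).
listContains :: Eq a => a -> [a] -> Bool
listContains _ []       = False            -- empty haystack: not found
listContains n (x : xs)
  | n == x    = True                       -- found at the head
  | otherwise = listContains n xs          -- keep looking in the tail
```

Each equation lines up with one clause of the type family above.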

We can now use this thing to generate an error message in the case that the method we’re looking for is not in our list of methods:

type family RequireHasMethod s (m :: Symbol) (found :: Bool) :: Constraint where
  RequireHasMethod s m 'False = TypeError
      ( Text "No method "
   :<>: ShowType m
   :<>: Text " available for service '"
   :<>: ShowType s
   :<>: Text "'."
   :$$: Text "Available methods are: "
   :<>: ShowType (ServiceMethods s)
      )
  RequireHasMethod s m 'True = ()

If found ~ 'False, then the method m we’re looking for is not part of the service s. We produce a nice error message informing the user about this (using ShowType to expand the type variables).

We will provide a type alias to perform this lookup:

type HasMethod' s m =
  ( RequireHasMethod s m (ListContains m (ServiceMethods s))
  , HasMethod s m
  )

Our new HasMethod' s m has the same shape as HasMethod, but will expand to our custom type error if we’re missing the method under scrutiny.

Replacing all of our old HasMethod constraints with HasMethod' works fantastically:

Main.hs:54:15: error:
    • No method "missing" available for service 'MyService'.
      Available methods are: '["biDiStreaming"]

Damn near perfect! That list of methods is kind of ugly, though, so we can write a quick pretty printer for showing promoted lists:

type family ShowList (ls :: [k]) :: ErrorMessage where
  ShowList '[]  = Text ""
  ShowList '[x] = ShowType x
  ShowList (x ': xs) = ShowType x :<>: Text ", " :<>: ShowList xs

Replacing our final ShowType with ShowList in RequireHasMethod now gives us error messages like the following:

Main.hs:54:15: error:
    • No method "missing" available for service 'MyService'.
      Available methods are: "biDiStreaming"

Absolutely gorgeous.
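For intuition, ShowList is just the familiar comma-separating recursion lifted to the type level. A value-level sketch of the same three cases (the helper name is ours):

```haskell
-- Value-level analogue of ShowList (illustration only): the empty list,
-- singleton, and cons cases each get a clause, so no trailing comma appears.
showCommaSep :: Show a => [a] -> String
showCommaSep []       = ""
showCommaSep [x]      = show x
showCommaSep (x : xs) = show x ++ ", " ++ showCommaSep xs
```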


This is where we stop. We’ve used type-level metadata to generate client- and server-side bindings to an underlying library. Everything we’ve made is entirely typesafe, and provides gorgeous, helpful error messages if the user does anything wrong. We’ve found a practical use for many of these seemingly-obscure type-level features, and learned a few things in the process.

In the words of my coworker Renzo Carbonara [1]:

“It is up to us, as people who understand a problem at hand, to try and teach the type system as much as we can about that problem. And when we don’t understand the problem, talking to the type system about it will help us understand. Remember, the type system is not magic, it is a logical reasoning tool.”

This resounds so strongly in my soul, and maybe it will in yours too. If so, I encourage you to go forth and find uses for these techniques to improve the experience and safety of your own libraries.

  1. Whose article “Opaleye’s sugar on top” was a strong inspiration on me, and subsequently on this post.


Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: The choice

Yesterday we mocked Mills here on this righteous blog. Today we commiserate. We feel your pain, moisters. You get all schooled up and brimming with expectations, then what happens? Yep. Reality. It blows.

Never before have so many people between 20 and 35 lived with their parents at home. Estimates vary, but in Canada the consensus is between 30% and 42%. The impact on parental finances is huge, but so must be the psychological toll of being an adult and yet a child. When measured against their parents, who generally fled the nest in their early 20s, this is a generation in which financial and personal maturity is being pushed back as never before.


A Credit Suisse report says Millennials face a perfect storm preventing them from independence. That includes crappy entry-level jobs with substandard wages, inflated real estate costs, careers without pensions and few benefits, withering, decaying Boomers who hold the good positions and refuse to retire, soaring rents (because of soaring housing) and an insane conservatism born of suspicion, mistrust and cynicism (and too much education). Wow. End of days stuff. If you think the next two decades of your life will be ashes, why leave Mom’s basement?

“With the baby boomers occupying most of the top jobs and much of the housing, millennials are doing less well than their parents at the same age, especially in relation to income, home ownership and other dimensions of wellbeing,” says the bank. A big issue is that people in their twenties and thirties are entering family-formation years and a giant, prolonged rutting season in which nesting, birthing and parenting are preoccupations. But because we’re now in a gig economy with stupid house prices, rising rates, tougher borrowing rules and endless volatility, it all ends in stress.

Of course the Mills are not alone when it comes to being fritzed. Another survey (they’re endless) says 70% of people stress over money to the extent that four in ten can’t sleep. Half of all marriages now end, with finances cited as the reason.

At the root of much of this is, of course, real estate. Canadians obsess about it, spend huge sums they don’t have to get it, then face decades of payments for mortgages, property tax, insurance, maintenance and utilities. If they end up making a capital gain, great. But then most just trade up to a bigger house, subsume all their profit within equity, and double down on a new mortgage. Meanwhile the poor moisters are stuck on the sidelines after an historic housing romp, unable to buy much other than a kennel-sized concrete box for $800 a foot.

So here’s the worse news, to accompany the bad news above. The housing market is destined to be negatively impacted by higher interest rates and tighter lending rules plus overbuilding. The areas whacked the most will be exactly where the Mills migrate – urban 416 and YVR. The kind of real estate most devastated will be just what the kids are now flocking to (because it’s what they can afford) – condos. It’s entirely possible anyone buying such a place now with a 20% down payment will lose all equity within the next two years. Your mortgage will equal the entire worth of the property, or more. The loss will be devastating if you deployed all your savings.

Meanwhile rents are high and vacancy rates low in Toronto, and especially Vancouver. Both leasing and owning are massively more expensive than they were for your parents and, of course, all the big money’s already been made in real estate. If you do buy a house from a Boomer, you’ll be funding his retirement and participating in the grandest transfer of wealth ever from one generation to the previous one.

So, be smart about it.

Don’t buy. It’s a wealth trap. An ‘affordable’ urban condo means forever-high monthly fees, no outside space, a limited universe of potential buyers, shoddy finishing and weird people living above and below you. You cannot control the building, the living environment nor shield yourself from special assessments when the parking garage needs a repair or the windows fog and must be replaced. The value of your unit is tied to that of every similar one, and with each new high-rise built, existing condo buildings become a little less desirable. When you can live in the same space as a renter for half the cost and none of the risk, why wouldn’t you?

Better still, leave town. Seriously.

Take that $400,000 and see what it buys in Halifax, Quebec City, London, Lloydminster or even Montreal and Ottawa. Sure, finding work may be a little more challenging, but apparently people have actual jobs in those cities. They don’t spend 110% of their incomes on accommodation, which means they can stuff their TFSAs, or even afford to have children. They get houses with yards, driveways, backyards and minivans. They pay off their debts.

The premium for living now in 416 or 604 is extreme, unrelenting, draining and destructive. It’s about to get worse. If you try, you may never recover. Get out and get a life. Or, stay with Mom. If that’s a tough choice, you’re already pooched.


Penny Arcade: News Post: It’s Got To Be Here Somewhere

Tycho: Gabe had grabbed some kind of a Starter Pack for a game called Skyforge, he didn’t know much about it but something about it had made him curious.  He’s been using the Xbox One X with some regularity, maybe it’s that.  If you have a new piece of hardware you’re always trying to test it out. I opened the website for the game and saw a unique individual at the top.  I don’t know exactly what’s going on with this guy, let’s see if we can figure it out.  His stomach is… a mouth.  I think that’s the best way of putting…

Better Embedded System SW: Highly Autonomous Vehicle Validation

Here are the slides from my TechAD talk today.

Highly Autonomous Vehicle Validation from Philip Koopman

Highly Autonomous Vehicle Validation: it's more than just road testing!
- Why a billion miles of testing might not be enough to ensure self-driving car safety.
- Why it's important to distinguish testing for requirements validation vs. testing for implementation validation.
- Why machine learning is the hard part of mapping autonomy validation to ISO 26262

Daniel Lemire's blog: Science and Technology links (November 17th, 2017)

Josiah Zayner, a biochemist who once worked for NASA, became the first person known to have edited his own genes (…) During a lecture about human genetic engineering that was streamed live on Facebook, Zayner whipped out a vial and a syringe, then injected himself. Now, following in his footsteps, other biohackers are getting ready to take the plunge and tinker with their own genes. Zayner’s experiment was intended to boost his strength by removing the gene for myostatin, which regulates muscle growth. A similar experiment in 2015 showed that this works in beagles whose genomes were edited at the embryo stage. He injected himself (…) to remove the gene. (Biohackers are using CRISPR on their DNA and we can’t stop it)

Human beings definitively do not have the largest brains:

We found that the long-finned pilot whale neocortex has approximately 37.2 × 10⁹ neurons, which is almost twice as many as humans, and 127 × 10⁹ glial cells. Thus, the absolute number of neurons in the human neocortex is not correlated with the superior cognitive abilities of humans (at least compared to cetaceans) as has previously been hypothesized.

We can make old mice smarter by tweaking just one gene according to an article published in Nature:

This study demonstrates for we believe the first time in vivo that 6 months after a single injection of s-KL into the central nervous system, long-lasting and quantifiable enhancement of learning and memory capabilities are found. More importantly, cognitive improvement is also observable in 18-month-old mice treated once, at 12 months of age.

I stumbled on an older post (2015) by Marcel Weiher about his views on where software is headed:

(…) for most performance critical tasks, predictability is more important than average speed (…) Alas the idea that writing high-level code without any concessions to performance (often justified by misinterpreting or simply just misquoting Knuth) and then letting a sufficiently smart compiler fix it lives on. I don’t think this approach to performance is viable, more predictability is needed and a language with a hybrid nature and the ability for the programmer to specify behavior-preserving transformations that alter the performance characteristics of code is probably the way to go for high-performance, high-productivity systems.

He is arguing that engineers have made it hard to reason about performance, and to design software for the needed performance. When we are stuck with unacceptable performance, we are often stuck… unable to know how to fix the problems.

We found a single gene that makes you live seven years longer on average than your peers.

Exercise increases the size of your brain.

Daniel Lemire's blog: Fast exact integer divisions using floating-point operations (ARM edition)

In my latest post, I explained how you could accelerate 32-bit integer divisions by transforming them into 64-bit floating-point divisions. Indeed, 64-bit floating-point numbers can represent accurately all 32-bit integers on most processors.
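The whole trick is one conversion and one truncation. Here is a sketch in Haskell rather than the C of the original benchmark (the function name is ours): since a 64-bit Double represents every Word32 exactly, the truncated floating-point quotient equals the integer quotient.

```haskell
import Data.Word (Word32)

-- 32-bit unsigned division computed through 64-bit floating point.
-- Both operands convert to Double exactly, so truncating the
-- floating-point quotient recovers the exact integer result.
divViaDouble :: Word32 -> Word32 -> Word32
divViaDouble x y = truncate (fromIntegral x / fromIntegral y :: Double)
```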

It is a strange result: Intel processors seem to do a lot better with floating-point divisions than integer divisions.

Recall the numbers that I got for the throughput of division operations:

64-bit integer division: 25 cycles
32-bit integer division (compile-time constant): 2+ cycles
32-bit integer division: 8 cycles
32-bit integer division via 64-bit float: 4 cycles

I decided to run the same test on a 64-bit ARM processor (AMD A1100):

64-bit integer division: 7 ns
32-bit integer division (compile-time constant): 2 ns
32-bit integer division: 6 ns
32-bit integer division via 64-bit float: 18 ns

These numbers are rough, my benchmark is naive (see code). Still, on this particular ARM processor, 64-bit floating-point divisions are not faster (in throughput) than 32-bit integer divisions. So ARM processors differ from Intel x64 processors quite a bit in this respect.

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - LIES

Click here to go see the bonus panel!

I am, however, totally ready for a new makeup trend for sharing historical fun-facts.

New comic!
Today's News:

Colossal: New Finnish Landscapes Captured Within Jars by Christoffer Relander

Photographer Christoffer Relander (previously) uses double exposure photography to “capture” wooded Finnish landscapes inside of glass jars. These images give a peek into the photographer’s past, while also metaphorically preserving the memories he formed as a child growing up in the south of Finland.

“Reality can be beautiful, but the surreal often absorbs me,” said Relander in an artist statement on his website. “Photography to me is a way to express and stimulate my imagination. Nature is simply the world. With alternative and experimental camera techniques I am able to create artworks that otherwise only would be possible through painting or digital manipulation in an external software.”

The new series is a follow-up to his black and white project Jarred & Displaced which was recently exhibited at the Finnish Cultural Institute in Madrid. You can view more of Relander’s wooded images on his Instagram. (via PetaPixel)

Penny Arcade: Comic: It’s Got To Be Here Somewhere

New Comic: It’s Got To Be Here Somewhere

Michael Geist: Closed by Default: Why is Prime Minister Trudeau Using Restrictive Terms for Flickr Image Use?

Yesterday’s post on Canada, the TPP and intellectual property raised a concern unrelated to the content of the piece. Since updating my site several years ago, I use a Creative Commons licensed or public domain image for virtually every post, celebrating the remarkable creativity of people and organizations from around the world who make their work freely available for anyone to use. In searching for an updated image on the TPP, I encountered a problem that has arisen with increased frequency. Several governments posted relevant images from the meetings in Vietnam and the Philippines, but the Canadian images featured restrictive terms and conditions in the form of an all rights reserved approach.

For example, there are two pictures from the same meeting downloaded from Flickr accompanying this post. The one on the left is from the President of Mexico’s Flickr page and is subject to a Creative Commons licence that permits non-commercial re-use. The picture on the right, taken from Justin Trudeau’s Flickr page, is all rights reserved. While I believe that I can rely on fair dealing and the Copyright Act’s non-commercial user generated content provision to use the picture, the restrictive licensing approach, which has become pervasive within the federal government on Flickr, is out-of-step with the standard of governments around the world and inconsistent with the “open by default” commitment.

The Prime Minister’s web page explicitly states that the works on Flickr are subject to crown copyright with all rights reserved:

Images and videos available through the Prime Minister’s Twitter, Flickr, YouTube and the Prime Minister’s Volunteer Awards Facebook accounts are subject to a Canadian Crown Copyright with all rights reserved, unless otherwise specified. We use Creative Commons Licenses to enable the sharing and use of images and videos in accordance with the terms set out in the specified Creative Commons license.

The problem is that the images on Flickr do not use Creative Commons licences but rather state that they are all rights reserved.

An open licensing approach that permits at least non-commercial use is commonly used by leaders, parliaments, and government departments around the world, with most relying on either a Creative Commons licence or immediately placing the work in the public domain. Examples include the UK Prime Minister, the Prime Minister of India, the Presidents of France, Mexico, and the United States, the European Parliament, the Government of South Korea, Government of Guatemala, the National Assembly for Wales, and Australia’s Department of Foreign Affairs to name just a few. Many of these governments provide public domain licences that allow for use of any kind. In Canada, many provincial governments also use more flexible licensing options including the Premier of Alberta, Province of British Columbia, Province of PEI, and Province of Newfoundland and Labrador.

The all rights reserved approach means that foreign and provincial governments (along with international organizations) are now often the primary source for openly licensed pictures of Canadian ministers. Want a picture of Trudeau with UK Prime Minister Theresa May? There are many with open licences from May, but similar pictures on Flickr from Trudeau are all rights reserved. Want a picture of Trudeau at the recent ASEAN or APEC meetings in Vietnam and the Philippines? The White House has a public domain one, but Trudeau’s pictures are again all rights reserved.

The situation is similar for pictures of most cabinet ministers. Pictures of ISED Minister Navdeep Bains from his department’s Flickr page are all rights reserved, but the Province of B.C. has a Creative Commons licensed one. Finance Minister Bill Morneau’s photos are all rights reserved, but there are Creative Commons licensed images from the IMF and OECD. Canadian Heritage uses all rights reserved for its Flickr pictures (which are oddly focused on British royalty), but B.C. again offers a Creative Commons licensed one for Minister Melanie Joly. Global Affairs no longer seems to post political-related photos, but Foreign Affairs Minister Chrystia Freeland has dozens of photos from other governments, leaders, ministers, and international organizations.

In fact, some Creative Commons licensed images posted to Flickr under the previous Conservative government have been removed altogether. I relied on a Creative Commons licensed image of former Prime Minister Harper from his own Flickr page in a 2015 post but it has since been made private. The same is true for an image of former International Trade Minister Ed Fast in a 2015 post on the TPP.

There are certainly alternatives to relying on Creative Commons licensed or public domain images for websites, educational materials or other uses. Many uses of a single image will qualify as fair dealing, provided they are used with one of the enumerated purposes under the law. Similarly, an original non-commercial work that incorporates other copyrighted works may qualify under the non-commercial user generated content exception. Further, these images may be posted elsewhere, perhaps with less restrictive terms.

Yet today Flickr is the largest online image platform for openly licensed images in the world with 381 million Creative Commons or public domain licensed images. With search functionality that makes it easy to work through millions of images, it is a remarkably useful tool for finding and using openly licensed works without the need for further copyright analysis or permissions. The government should be actively encouraging the use of its images, for which the public has paid through their tax dollars. Indeed, a government committed to open-by-default should not require people to engage in a copyright analysis to determine whether they can use an image of the Prime Minister or government officials. Absent the much-needed elimination of crown copyright, the government should immediately shift to Creative Commons or public domain licences for its images on Flickr.


The post Closed by Default: Why is Prime Minister Trudeau Using Restrictive Terms for Flickr Image Use? appeared first on Michael Geist.


Cyanide and Happiness: Comic for 2017.11.17

New Cyanide and Happiness Comic

Ideas from CBC Radio (Highlights): Making the Team with 2017 Friesen Prize winner Dr. Alan Bernstein

2017 Friesen Prize winner Dr. Alan Bernstein talks with Paul Kennedy about his contributions to Canadian Medicine and advanced research. He continues to encourage and develop the spirit of teamwork that has characterized his entire career.

Jesse Moynihan: The World Part 1

Lots of notes on this one. Papus is not usually very reliable, but it’s still interesting to get his input. (Crowley’s version) (Tarot de Paris)  

IEEE Job Site RSS jobs: Canada Research Chair - Tier 1

Waterloo, Ontario, Canada University of Waterloo Thu, 16 Nov 2017 16:27:17 -0800

Perlsphere: President of UN General Assembly Thanks End Point

The President of UN General Assembly, Peter Thomson, thanked End Point for supporting the Ocean Conference, which was held at the United Nations Headquarters this past summer to bring attention and action to saving the world’s oceans.

End Point’s Liquid Galaxy helped bring to life “Reconnecting Humanity to the Sea,” an exhibition meant to showcase the beauty of the ocean and the challenges it faces today. End Point created the presentation’s content and showcased it at the conference.

“We were very pleased to see End Point’s Liquid Galaxy used to promote a hopeful future for the world’s oceans. It’s very satisfying to see our technology used to make an important story that much more compelling.”

Rick Peltzman
End Point, CEO

This UN press release explains more about the conference and its results:
“UN Ocean Conference wraps up with actions to restore ocean health, protect marine life”

See the letter:

Quiet Earth: RAMPAGE Trailer Destroys Everything

The first trailer for Rampage is upon us! Based on the famous arcade game, Rampage stars Dwayne "The Rock" Johnson; watch the trailer below.

Primatologist Davis Okoye (Johnson), a man who keeps people at a distance, shares an unshakable bond with George, the extraordinarily intelligent, silverback gorilla who has been in his care since birth. But a rogue genetic experiment gone awry transforms this gentle ape into a raging monster.

To make matters worse, it’s soon discovered there are other similarly altered alpha predators. As these newly created monsters tear across North America, destroying everything in their path, Okoye teams with a discredited genetic engineer to secure an antidote, fighting his way through an ever-changing battlefiel [Continued ...]

Disquiet: Disquiet Junto Project 0307: Black and White and Punk All Over

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Deadline: This project’s deadline is 11:59pm (that is, just before midnight) wherever you are on Monday, November 20, 2017. This project was posted in the morning, California time, on Thursday, November 16, 2017.

Tracks will be added to the playlist for the duration of the project.

These are the instructions that went out to the group’s email list:

Disquiet Junto Project 0307: Black and White and Punk All Over
Pay tribute to the Sex Pistols on the 40th anniversary of Never Mind the Bollocks.

It’s the 40th anniversary of the Sex Pistols’ 1977 album Never Mind the Bollocks, a punk origin point if ever there were one. Of course, punk can be traced both forward and further back, not just back to the prior urtexts of the Ramones, but further still to Dada, to the Situationist International. To explore punk continuity, and borrowing from Greil Marcus’ Lipstick Traces, we’re going to use a classic Situationist film, Hurlements en faveur de Sade, as a starting point. The whole thing is streamable here:

Note that Hurlements has no visual component outside of a white and black screen: when the screen is white, people speak; when it is black, there is silence.

Major thanks to Zero Meaning, Toaster, and Jason Wehmhoener for leading the way in developing this prompt.

Step 1: Think about what “punk” means to you. The word has mutated over time, and can embody something as specific as instrumentation and recording, and as broad as a spirit, an approach, a philosophy. It’s both a source of inspiration and a nexus of conflicted debate. Keep your sense of punk in mind as you follow the remaining steps.

Step 2: Choose a length for your project. Somewhere under 5 minutes should do fine.

Step 3: Using chance methods, divide that length into segments. A minimum of one segment per minute should keep things interesting, though certainly feel free to use more or fewer segments.

Step 4: Label your first segment “white” and your second “black” and your third “white” and, per Hurlement, continue to alternate labels until all the segments are labeled.

Step 5: Choose two sets of instruments (or, more broadly defined, sound sources). If you don’t have access to more than one instrument, then vary the playing approach (plucking vs. bowing, chords vs. single notes, different patches on a single synth, etc.). Label one set “white” and the other “black.” Do not use any of the same instruments/methods in the “white” sections that you use in the “black” sections. The amount of cohesion between the segments is up to you, but keep the instrumentation the same in each.

Step 6: Now, jam econo. Record an original piece of music in the simplest way possible, using only white instruments in the white segments and black instruments in the black segments. Consider something more lo-fi than you might normally use.

Five More Important Steps When Your Track Is Done:

Step 1: If your hosting platform allows for tags, be sure to include the project tag “disquiet0307” (no spaces) in the name of your track. If you’re posting on SoundCloud in particular, this is essential to my locating the tracks and creating a playlist of them.

Step 2: Upload your track. It is helpful but not essential that you use SoundCloud to host your track.

Step 3: In the following discussion thread, please consider posting your track:

Step 4: Annotate your track with a brief explanation of your approach and process.

Step 5: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Deadline: This project’s deadline is 11:59pm (that is, just before midnight) wherever you are on Monday, November 20, 2017. This project was posted in the morning, California time, on Thursday, November 16, 2017.

Length: Somewhere under 5 minutes seems about right, but it’s up to you.

Title/Tag: When posting your track, please include “disquiet0307” in the title of the track, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, post one finished track with the project tag, and be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: It is preferable that your track is set as downloadable, and that it allows for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution).

Linking: When posting the track online, please be sure to include this information:

More on this 307th weekly Disquiet Junto project (“Black and White and Punk All Over: Pay tribute to the Sex Pistols on the 40th anniversary of Never Mind the Bollocks.”) at:

More on the Disquiet Junto at:

Subscribe to project announcements here:

Project discussion takes place on

There’s also a Junto Slack. Send your email address to for Slack inclusion.

Photo associated with this project is a detail of an image by Koen Suyk used via Wikipedia thanks to a Creative Commons license:

Michael Geist: Bursting the IP Trade Bubble: Canada’s Position on IP Rules Takes Shape With Suspended TPP Provisions

In the months following the conclusion of the Trans Pacific Partnership, critics pointed to many specific problems in the text with respect to intellectual property, culture, privacy, and dispute resolution. TPP defenders consistently dismissed those concerns, yet last week’s successful Canadian demand to suspend many of the most problematic IP provisions (along with holding out for reforms to the cultural exemption) confirms that the government has recognized the validity of the criticisms. The government may yet cave to U.S. pressure in the NAFTA renegotiation, but it has established a clear position on culture and IP that better reflects the national interest.

For example, as part of my 50 day Trouble with the TPP series, I pointed to a surprising shift in Canadian trade policy with respect to culture. While Canada had long insisted that the cultural industries receive a full exemption, the Conservative government had agreed to important exceptions to that general rule in the TPP.  Buried in Annex II of non-conforming measures were two exceptions to the cultural exception that could be used to block efforts to create mandated Cancon contributions for foreign providers or regulatory restrictions on foreign audio-visual content. Leaving aside whether these would be “good” policy measures, I argued that they did not belong in a trade agreement. While some disagreed (I responded here), the government’s insistence that it will not agree to the CPTPP without addressing the cultural issue validates the concerns, suggesting that policy makers recognize what is obvious from the wording of the text, namely that the TPP would restrict Canadian cultural policy.

The same is true for the TPP copyright and patent provisions. TPP supporters have frequently sought to downplay the significance of copyright term extension, the loss of flexibility on technological protection measures, patent term extension, and fixing the minimum standard for biologics protections. In fact, those provisions extend far beyond international treaty requirements and restrict the ability for countries to tailor their intellectual property laws consistent with those global rules. While the U.S. has been a longstanding proponent of exporting its IP laws, other countries have had strong misgivings about the approach. The Conservatives were willing to cave on these issues during the TPP negotiations, but the Liberal decision to demand suspension of those provisions – which garnered agreement from other TPP countries – demonstrates the quiet opposition to more restrictive copyright and patent rules. Far from being out-of-step with our trading partners, Canadian policy preferences are actually widely shared with many other countries.

The big question is what comes next. On the TPP11 (or CPTPP), the remaining countries have agreed to give everyone an effective veto power with respect to new entrants. Therefore, rather than being used as an incentive to entice the U.S. back into the deal, it may be difficult for the U.S. to convince all remaining countries to unanimously support an end to the suspended provisions. Even if one holds out, the provisions remain suspended.

The TPP11 outcome also confirms – yet again – that there is simply no need for excessively restrictive IP rules in modern trade agreements. The TPP11, the Canada – South Korean trade agreement, and CETA all feature robust IP chapters but do not include provisions such as mandatory copyright term extension beyond international treaty requirements. The NAFTA negotiations, however, will represent a much more difficult challenge as the U.S. is likely to re-assert TPP-style demands in that agreement. Canada may have a tougher time fending off U.S. pressure given the myriad of contentious issues – some reports suggest that IP will be an area to deal if the U.S. compromises on other issues – but its TPP position highlights that politicians and policy makers recognize that extending the term of copyright or patents and limiting future IP and cultural policy flexibility is not in Canada’s national interest.

The post Bursting the IP Trade Bubble: Canada’s Position on IP Rules Takes Shape With Suspended TPP Provisions appeared first on Michael Geist.

Daniel Lemire's blog: Fast exact integer divisions using floating-point operations

On current processors, integer division is slow. If you need to compute many quotients or remainders, you can be in trouble. You potentially need divisions when programming a circular buffer, a hash table, generating random numbers, shuffling data randomly, sampling from a set, and so forth.

There are many tricks to avoid performance penalties:

  • You can avoid dividing by an arbitrary integer and, instead, divide by a known power of two.
  • You can use a divisor that is known to your compiler at compile-time. In these cases, most optimizing compilers will “optimize away” the division using magical algorithms that precompute a fast division routine.
  • If you have a divisor that is not known at compile time, but that you reuse often, you can make use of a library like libdivide to precompute a fast division routine.
  • You can reengineer your code to avoid needing a division in the first place, see my post A fast alternative to the modulo reduction.

But sometimes, you are really stuck and need those divisions. The divisor is not frequently reused, and you have lots of divisions to do.

If you have 64-bit integers, and you need those 64 bits, then you might be in a bit of trouble. Those long 64-bit integers have a terribly slow division on most processors, and there may not be a trivial way to avoid the price.

However, if you have 32-bit integers, you might have a way out. Modern 64-bit processors have 64-bit floating-point numbers following the IEEE standard. These 64-bit floating-point numbers can be used to represent exactly all integers in the interval [0, 2^53). That means that you can safely cast your 32-bit unsigned integers to 64-bit floating-point numbers.

Furthermore, common x64 processors have fast floating-point divisions, and the division operation over floating-point numbers is certain to result in the closest number that the standard can represent. The division of an integer in [0, 2^32) by an integer in [1, 2^32) is sure to be in [0, 2^32). This means that you can almost replace the 32-bit integer division by a 64-bit floating-point division:

uint32_t divide(uint32_t a, uint32_t b) {
  double da = (double) a; // exact: every 32-bit integer fits in a double
  double db = (double) b;
  double q = da / db;     // correctly rounded IEEE division
  return (uint32_t) q;    // truncate back to a 32-bit integer
}
Sadly, if you try to divide by zero, you will not get a runtime error, but rather a nonsensical result. Still, if you can be trusted not to divide by zero, this provides a fast and exact integer division routine.

How much faster is it? I wrote a small program to measure the throughput:

  • 64-bit integer division: 25 cycles
  • 32-bit integer division (compile-time constant): 2+ cycles
  • 32-bit integer division: 8 cycles
  • 32-bit integer division via 64-bit float: 4 cycles

These numbers are rough, but we can estimate that we double the throughput.

I am not entirely sure why compilers fail to exploit this trick. Of course, they would need to handle the division by zero, but that does not seem like a significant barrier. There is also another downside to the floating-point approach: it generates many more instructions.

Signed integers work much the same, but you need extra care. For example, most processors rely on two’s complement notation, which implies that there is one negative number that cannot be represented as a positive number; implementing “x / (-1)” can therefore cause some headaches. You probably do not want to divide signed integers this way anyhow.

I plan to come back to the scenario where you have lots of 64-bit integer divisions with a dynamic divisor.

This result is for current Intel x64 processors; what happens on ARM processors is quite different.

CreativeApplications.Net: Variable – The signification of terms in artists’ statements

Created by Selcuk Artut, Variable is an artwork that explores the signification of terms in artists' statements. The artwork uses machine learning algorithms to thoughtfully problematise the limitations of algorithms and encourage the visitor to reflect on poststructuralism’s ontological questions.

New Humanist Blog: Capitalism: the winter 2017 New Humanist

Out now - how the system shapes the way we think.

Blog – Free Electrons: Mender: How to integrate an OTA updater

Recently, our customer Senic asked us to integrate an Over-The-Air (OTA) update mechanism in their embedded Linux system, and after some discussion, they ended up choosing Mender. This article will detail an example of Mender’s integration and how to use it.

What is Mender?

Mender is an open source remote updater for embedded devices. It is composed of a client installed on the embedded device, and a management server installed on a remote server. However, the server is not mandatory as Mender can be used standalone, with updates triggered directly on the embedded device.

Image taken from Mender’s website

In order to offer a fallback in case of failure, Mender uses a double partition layout: the device will have at least two rootfs partitions, one active and one inactive. Mender will deploy an update on the inactive partition, so that in case of an error during the update process, the active partition remains intact. If the update succeeds, it will switch to the updated partition: the active partition becomes inactive, and the inactive one becomes the new active partition. As the kernel and the device tree are stored in the /boot folder of the root filesystem, it is possible to easily update an entire system. Note that Mender needs at least 4 partitions:

  • bootloader partition
  • data persistent partition
  • rootfs + kernel active partition
  • rootfs + kernel inactive partition

It is, of course, customizable if you need more partitions.

Two reference devices are supported: the BeagleBone Black and a virtual device. In our case, the board was a Nanopi-Neo, which is based on an Allwinner H3.

Mender provides a Yocto Project layer containing all the necessary classes and recipes to make it work. The most important thing to know is that it will produce an image ready to be written to an SD card to flash empty boards. It will also produce “artifacts” (files with .mender extension) that will be used to update an existing system.

Installation and setup

In this section, we will see how to setup the Mender client and server for your project. Most of the instructions are taken from the Mender documentation that we found well detailed and really pleasant to read. We’ll simply summarize the most important steps.

Server side

The Mender server will allow you to remotely update devices. The server can be installed in two modes:

  • demo mode: used to try out a demo server. It is handy if you just want to quickly deploy a Mender solution, for testing purposes only. It includes a demo layer that simplifies the setup and configures a default Mender server on the localhost of your workstation.
  • production mode: used for production. We will focus on this mode as we wanted to use Mender in a production context. This mode allows you to customize the server configuration: IP address, certificates, etc. Because of that, some configuration will be necessary (which is not the case in demo mode).

In order to install the Mender server, you should first install Docker CE and Docker Compose. Have a look at the corresponding Docker instructions.


  • Download the integration repository from Mender:
  • $ git clone mender-server
  • Check out the 1.1.0 tag (the latest version at the time of testing)
  • $ cd mender-server
    $ git checkout 1.1.0 -b my-production-setup
  • Copy the template folder and update all the references to “template”
  • $ cp -a template production
    $ cd production
    $ sed -i -e 's#/template/#/production/#g' prod.yml
  • Download Docker images
  • $ ./run pull
  • Use the keygen script to create certificates for your domain names
  • $ ../keygen
  • Mender will need some persistent storage, so create a few Docker volumes:
  • $ docker volume create --name=mender-artifacts
    $ docker volume create --name=mender-deployments-db
    $ docker volume create --name=mender-useradm-db
    $ docker volume create --name=mender-inventory-db
    $ docker volume create --name=mender-deviceadm-db
    $ docker volume create --name=mender-deviceauth-db

Final configuration

This final configuration will link the generated keys with the Mender server. All the modifications will be in the prod.yml file.

  • Locate the storage-proxy service in prod.yml and set it to your domain name; in our case, this is under networks.mender.aliases
  • Locate the minio service. Set MINIO_ACCESS_KEY to “mender-deployments” and MINIO_SECRET_KEY to a generated password (created with e.g. $ apg -n1 -a0 -m32)
  • Locate the mender-deployments service. Set DEPLOYMENTS_AWS_AUTH_KEY and DEPLOYMENTS_AWS_AUTH_SECRET to the values of MINIO_ACCESS_KEY and MINIO_SECRET_KEY, respectively. Set DEPLOYMENTS_AWS_URI to point to your domain

Start the server

Make sure that the domain names you have defined are accessible, potentially by adding them to /etc/hosts if you’re just testing.

  • Start the server
  • $ ./run up -d
  • If it is a new installation, request initial user login:
  • $ curl -X POST  -D - --cacert keys-generated/certs/api-gateway/cert.crt
  • Check that you can create a user and login to mender UI:
  •  $ firefox 

Client side – Yocto Project

Mender has a Yocto Project layer to easily interface with your own layer.
We will see how to customize your layer and image components (U-Boot, Linux kernel) to correctly configure it for Mender use.

In this section, we will assume that you have your own U-Boot and your own kernel repositories (and thus, recipes) and that you retrieved the correct branch of this layer.

Machine and distro configurations

  • Make sure that the kernel image and Device Tree files are installed in the root filesystem image
  • RDEPENDS_kernel-base += "kernel-image kernel-devicetree"
  • Update the distro to inherit the mender-full class and add systemd as the init manager (we only tested Mender’s integration with systemd)
  • # Enable systemd for Mender
    DISTRO_FEATURES_append = " systemd"
    VIRTUAL-RUNTIME_init_manager = "systemd"
    VIRTUAL-RUNTIME_initscripts = ""
    INHERIT += "mender-full"
  • By default, Mender assumes that your storage device is /dev/mmcblk0, that mmcblk0p1 is your boot partition (containing the bootloader), that mmcblk0p2 and mmcblk0p3 are your two root filesystem partitions, and that mmcblk0p5 is your data partition. If that’s the case for you, then everything is fine! However, if you need a different layout, you need to update your machine configuration. Mender’s client will retrieve which storage device to use by using the MENDER_STORAGE_DEVICE variable (which defaults to mmcblk0). The partitions themselves should be specified using MENDER_BOOT_PART, MENDER_ROOTFS_PART_A, MENDER_ROOTFS_PART_B and ROOTFS_DATA_PART. If you need to change the default storage or the partitions’ layout, edit in your machine configuration the different variables according to your need. Here is an example for /dev/sda:
  • MENDER_STORAGE_DEVICE = "/dev/sda"
  • Do not forget to update the artifact name in your local.conf, for example:

    MENDER_ARTIFACT_NAME = "release-1"

As described in Mender’s documentation, Mender will store the artifact name in its artifact image. It must be unique, which is what we expect, since an artifact represents a release tag or a delivery. Note that if you forget to update it and upload an artifact with the same name as an existing one in the web UI, it will not be taken into account.
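Putting the storage variables described above together, a machine configuration fragment might look like the following (the device name and partition numbers are illustrative assumptions, not taken from the original setup):

```
# Illustrative example for a board whose storage appears as /dev/sda
MENDER_STORAGE_DEVICE = "/dev/sda"
MENDER_BOOT_PART = "/dev/sda1"
MENDER_ROOTFS_PART_A = "/dev/sda2"
MENDER_ROOTFS_PART_B = "/dev/sda3"
ROOTFS_DATA_PART = "/dev/sda5"
```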

U-Boot configuration tuning

Some modifications to U-Boot are necessary to be able to perform a rollback (i.e. use a different partition after an unsuccessful update).

  • Mender needs BOOTCOUNT support in U-Boot. This creates a bootcount variable that is incremented on each reboot (or reset to 1 after a power-on reset). Mender uses this variable in its rollback mechanism.
    Make sure to enable this support in your U-Boot configuration. This will most likely require a patch to your board’s .h configuration file.
  • Remove environment variables that will be redefined by Mender. They are defined in Mender’s documentation.
  • Update your U-Boot recipe to inherit Mender’s one and make sure to provide U-Boot virtual package (using PROVIDES)
  • # Mender integration
    require recipes-bsp/u-boot/
    PROVIDES += "u-boot"
    RPROVIDES_${PN} += "u-boot"
    BOOTENV_SIZE = "0x20000"

    The BOOTENV_SIZE must be set to the same value as the U-Boot CONFIG_ENV_SIZE variable. It will be used by the u-boot-fw-utils tool to retrieve the U-Boot environment variables.

    Mender uses u-boot-fw-utils, so make sure that you have a recipe for it and that Mender’s include file is included. To do that, you can create a bbappend file on top of the default recipe, or create your own recipe if you need a specific version. Have a look at Mender’s documentation example.

  • Tune your U-Boot environment to use Mender’s variables. Here are some examples of the modifications to be done. Set the root= kernel argument to use ${mender_kernel_root}, set the bootcmd to load the kernel image and Device Tree from ${mender_uboot_root} and to run mender_setup. Make sure that you are loading the Linux kernel image and Device Tree file from the root filesystem /boot directory.
    setenv bootargs 'console=${console} root=${mender_kernel_root} rootwait'
    setenv mmcboot 'load ${mender_uboot_root} ${fdt_addr_r} boot/my-device-tree.dtb; load ${mender_uboot_root} ${kernel_addr_r} boot/zImage; bootz ${kernel_addr_r} - ${fdt_addr_r}'
    setenv bootcmd 'run mender_setup; run mmcboot'

Mender’s client recipe

As stated in the introduction, Mender has a client, in the form of a userspace application, that runs on the target. Mender’s layer has a Yocto recipe for it, but that recipe does not include our server certificates. To establish a connection between the client and the server, the certificates have to be installed in the image. For that, a bbappend recipe will be created. It will also allow us to perform additional Mender configuration, such as defining the server URL.

  • Create a bbappend for the Mender recipe
  • FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
    SRC_URI_append = " file://server.crt"
  • Copy your server certificates into the bbappend recipe folder

Recompile the image, and you should now have everything you need to update a system. Do not hesitate to run through the integration checklist; it is a really convenient way to check whether everything is correctly configured (or not).

If you want to be more robust and secure, you can sign your artifacts to make sure that they come from a trusted source. If you want this feature, have a look at this documentation.


Standalone mode

To update an artifact using the standalone mode (i.e. without server), here are the commands to use. You will need to update them according to your needs.

  • On your work station, create a simple HTTP server in your Yocto deploy folder:
  • $ python -m SimpleHTTPServer
  • On the target, start mender in standalone mode
  • $ mender -log-level info -rootfs

    You can also use the mender command to start an update from a local .mender file, provided by a USB key or SD card.

  • Once finished, you will have to reboot the target manually
  • $ reboot

    After the first reboot, you will be on the new active partition (if the previous one was /dev/mmcblk0p2, you should be on /dev/mmcblk0p3). Check the kernel version, artifact name or command line:

    $ uname -a
    $ cat /etc/mender/artifact_info
    $ cat /proc/cmdline

    If you are okay with this update, you will have to commit the modification; otherwise the update will not be persistent, and once you reboot the board, Mender will roll back to the previous partition:

    $ mender -commit

Using Mender’s server UI

The Mender server UI provides a management interface to deploy updates on all your devices. It knows about all your devices, their current software version, and you can plan deployments on all or a subset of your devices. Here are the basic steps to trigger a deployment:

  • Log in to (or create an account on) the Mender server UI
  • Power-up your device
  • The first time, you will have to authorize the device. You will find it in your “dashboard” or in the “devices” section.
  • After you authorize it, the server will retrieve device information such as the current software version, MAC address, network interface, and so on
  • To update a partition, you will have to create a deployment using an artifact.
  • Upload the new artifact in the server UI using the “Artifacts” section
  • Deploy the new artifact using the “deployment” or the “devices” section. You can follow the deployment in the “status” field; it will go through “installing”, “rebooting”, etc. The board will reboot and the partition should be updated.


Here are some issues we faced when we integrated Mender for our device. The Mender documentation also has a troubleshooting section, so have a look at it if you are facing issues. Otherwise, the community seems to be active, even though we did not need to interact with it, as Mender worked like a charm when we tried it.

Update systemd’s service starting

By default, the Mender systemd service starts once the domain name can be resolved. On our target device, the network was only available via WiFi, so we had to wait for the wlan0 interface to be up and connected to a network before starting Mender’s service; otherwise, Mender fails with an error because the network is unreachable. To solve this issue, which is specific to our platform, we adjusted the systemd dependencies to make sure that a network is available:


This now matches our use case: the Mender service will start only if the wlan0 connection is available and working.
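As an illustration, the kind of systemd override we mean could look like the drop-in below; the exact unit and target names depend on your platform and are assumptions on my part, not taken from the original setup:

```
# /etc/systemd/system/mender.service.d/wait-for-network.conf
[Unit]
After=network-online.target
Wants=network-online.target
```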

Certificate expired

The certificates generated and used by Mender have a validity period. If your board does not have an RTC set, Mender can fail with the error:

systemctl status mender
... level=error msg="authorize failed: transient error: authorization request failed: failed to execute authorization request:
Post https:///api/devices/v1/authentication/auth_requests: x509: certificate has expired or is not yet valid" module=state

To solve this issue, update the date on your board and make sure your RTC is correctly set.

Device deletion

While testing Mender’s server (version 1.0), we always used the same board and ran into an issue: the board was already registered in the server UI but had a different Device ID (which Mender uses to identify devices). Because of that, the server kept rejecting the authentication. The next release of the Mender server offers the possibility to remove a device, so we updated the Mender server to the latest version.

Deployments not taken into account

Note that by default the Mender client checks every 30 minutes whether a deployment is available for the device. During testing, you may want to reduce this period, which you can do in Mender’s configuration file using its UpdatePollIntervalSeconds variable.
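For example, a shorter polling period can be set in the Mender client configuration file, /etc/mender/mender.conf (the server URL and the 60-second value here are illustrative):

```
{
  "ServerURL": "https://your.mender.server",
  "UpdatePollIntervalSeconds": 60
}
```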


Mender is an OTA updater for embedded devices. It has great documentation in the form of tutorials, which makes the integration easy. While testing it, the only issues we hit were related to our custom platform or were already covered in the documentation. Deploying it on a board was not difficult; only some U-Boot/kernel and Yocto Project modifications were necessary. All in all, Mender worked perfectly fine for our project!

Ideas from CBC Radio (Highlights): Confronting the 'perfect storm': How to feed the future

We're facing what could be a devastating crisis—how to feed ourselves without destroying the ecosystems we depend on. In partnership with the Arrell Food Institute at the University of Guelph we seek out creative solutions to a looming disaster.

Daniel Lemire's blog: Fast software is a discipline, not a purpose

When people train, they usually don’t try to actually run faster or lift heavier weights. As a relatively healthy computer science professor, how fast I run or how much I can lift is of no practical relevance. However, whether I can walk the stairs without falling apart is a metric.

I am not an actor or a model. Who cares how much I weigh? I care: it is a metric.

I could probably work in a dirty office without ill effect, but I just choose not to.

So when I see inefficient code, I cringe. I am being told that it does not matter. Who cares? We have plenty of CPU cycles. I think you should care, it is a matter of discipline.

Yes, only about 1% of all code being written really matters. Most people write code that may as well be thrown out.

But then, I dress cleanly every single day even if I stay at home. And you should too.

I do not care which programming language you use. It could be C, it could be JavaScript. If your code is ten times slower than it should, I think it shows that you do not care, not really. And it bothers me. It should bother you because it tells us something about your work. It is telling us that you do not care, not really.

Alexander Jay sent me a nice email in which he reviewed some tricks he uses to write fast code. It inspired these recommendations:

  • Avoid unnecessary memory allocations.
  • Avoid multiple passes over the data when one would do.
  • Avoid unnecessary runtime inferences and branches.
  • Avoid unnecessary performance-adverse abstraction.
  • Prefer simple value types when they suffice.
  • Learn how the data is actually represented in bits, and learn to dance with these bits when you need to.

Alexander asked me “At what point would you consider the focus on optimization a wasted effort?” My answer: “At what point do you consider being fit and clean a wasted effort?”

There is a reason we don’t tend to hire people who show up to work in dirty t-shirts. It is not that we particularly care about dirty t-shirts, it is that we want people who care about their work.

If you want to show care for your software, you first have to make it clean, correct and fast. If you start caring about bugs and inefficiencies, you will write good software. It is that simple.


Waterloo, Ontario, Canada University of Waterloo Wed, 15 Nov 2017 15:40:05 -0800

IEEE Job Site RSS jobs: Faculty Position in COMPUTER SYSTEMS SOFTWARE

Waterloo, Ontario, Canada University of Waterloo Wed, 15 Nov 2017 15:39:52 -0800

The Shape of Code: À la carte Entropy

My observation that academics treat Entropy as the go-to topic, when they have no idea what else to talk about, has ruffled a few feathers. David Clark, one of the organizers of a workshop on Information Theory and Software Testing has invited me to give a talk on Entropy (the title is currently Entropy for the uncertain, but this state might change :-).

Complaining about the many ways entropy is currently misused in software engineering would be like shooting fish in a barrel, and equally pointless. I want to encourage people to use entropy in a meaningful way, and to stop using Shannon entropy just because it is the premium brand of entropy.

Shannon’s derivation of the iconic formula $-\sum_i p_i \log{p_i}$ depends on various assumptions being true. While these conditions look like they might hold for some software engineering problems, they clearly don’t hold for others. It may be possible to use other forms of entropy for some of these other problems; Shannon became the premium brand of entropy because it was first to market, the other entropy products have not had anyone championing their use, and academics follow each other like sheep (it’s much easier to get a paper published by using the well-known brands).

Shannon’s entropy has been generalized, with the two most well-known being (in the limit $q \to 1$, both converge to Shannon entropy):

Rényi entropy in 1961: $\frac{1}{1-q}\log\left(\sum_i p_i^q\right)$

Tsallis entropy in 1988: $\frac{1}{q-1}\left(1 - \sum_i p_i^q\right)$

All of these formulas reduce a list of probabilities to a single value: a weighting is applied to each probability, and the weighted values are summed to produce a single value that is further manipulated. The probability weighting functions are plotted below:

Probability weighting function

Under what conditions might one of these two forms of entropy be used (there are other forms)? I have been rummaging around looking for example uses, and could not find many.

There are some interesting papers about possible interpretations of the q parameter in Tsallis entropy: the most interesting paper I have found shows a connection with the correlation between states, e.g., preferential attachment in networks. This implies that Tsallis entropy is the natural first candidate to consider for systems exhibiting power law characteristics. Another paper suggests q != 1 derives from variation in the parameter of an exponential equation.

Some computer applications: a discussion of Tsallis entropy and the concept of non-extensive entropy, along with an analysis of statistical properties of hard disk workloads, and the same idea applied to computer memory.

Some PhD theses: Rényi entropy, with $q=2$, for error propagation in software architectures; a comparison of various measures of entropy as a metric for the similarity of program execution traces; plus using Rényi entropy in cryptography.

As you can see, I don’t have much to talk about. I’m hoping my knowledgeable readers can point me at some uses of entropy in software engineering where the author has put some thought into which entropy to use (which may have resulted in Shannon entropy being chosen; I’m only against this choice when it is made for brand name reasons).

Registration for the workshop is open, so turn up and cheer me on.

Roll your own weighting plot:

p_vals=seq(0.001, 1.001, by=0.01)
plot(p_vals, -p_vals*log(p_vals), type="l", col="red", # Shannon weighting
	ylim=c(0, 1),
	xaxs="i", yaxs="i",
	xlab="Probability", ylab="Weight")
q=2    # illustrative q > 1
lines(p_vals, p_vals^q, type="l", col="blue")
q=0.5  # illustrative q < 1
lines(p_vals, p_vals^q, type="l", col="green")

Quiet Earth: New on Blu-ray and DVD! November 14, 2017

Folks, it's a huge week for home releases so let's just dive right in, shall we? First up is this great set of Romero films from Arrow called "George A. Romero Between Night and Dawn" which collects all the fledgling filmmaker's films following Night of the Living Dead.

Titles in the set include Always Vanilla, Romero's sophomore 1971 directorial effort; 1972's Season of the Witch; and The Crazies, which saw Romero returning to more straight horror territory as a small rural town finds itself in the grip of an infection which sends its hosts into a violent, h [Continued ...]

Quiet Earth: Your Must-See Apocalyptic Vision of 2017: JUNK HEAD [Review]

No surprise: the future is an ugly, lonely place. Humanity has long moved away from physical activity and interaction, passing those tasks onto an army of clones who, after years of servitude, went on to overthrow their masters and take refuge underground.

In the 1,200 years since abandoning humanity, the clones have continued to evolve whereas humanity has fallen into further disarray until finally, it seems the human race is on the brink of collapse. In a final effort to keep themselves alive, the humans send an explorer underground. His mission is to track down a subterranean creature which the scientists believe holds the key to keeping humanity alive and so our explorer travels underground and embarks on the adventure of a lifetime.

Junk Head sounds like a must-see ap [Continued ...]

Ansuz - mskala's home page: Installing Slackware on a Dell Inspiron i3162-2040

My venerable Asus eeePC netbook finally gave up the ghost, and I replaced it with a Dell Inspiron i3162-2040. Here are some notes on what I had to do to get it up and running with Slackware Linux, both for my own future reference if I ever have to reinstall from scratch, and to help others who may be facing a similar adventure.