Slashdot: Notre Dame Official Says 'Computer Glitch' Could Be Fire Culprit

A "computer glitch" may have been behind the fast-spreading fire that ravaged Notre Dame, Associated Press reported Friday, citing the cathedral's rector. From the report: Speaking during a meeting of local business owners, rector Patrick Chauvet did not elaborate on the exact nature of the glitch, adding that "we may find out what happened in two or three months." On Thursday, Paris police investigators said they think an electrical short-circuit most likely caused the fire. French newspaper Le Parisien has reported that a fire alarm went off at Notre Dame shortly after 6 p.m. Monday but a computer bug showed the fire's location in the wrong place. The paper reported the flames may have started at the bottom of the cathedral's giant spire and may have been caused by an electrical problem in an elevator. Chauvet said there were fire alarms throughout the building, which he described as "well protected."

Read more of this story at Slashdot.

Recent additions: simple-ltl 0.1.0.0

Added by JohnWiegley, Fri Apr 19 18:00:43 UTC 2019.

A simple LTL checker

Slashdot: Microsoft Debuts Bosque, a New Programming Language With No Loops, Inspired by TypeScript

Microsoft has introduced a new open source programming language called Bosque that aspires to be simple and easy to understand by embracing algebraic operations and shunning techniques that create complexity. From a report: Bosque was inspired by the syntax and types of TypeScript and the semantics of ML and Node/JavaScript. It's the brainchild of Microsoft computer scientist Mark Marron, who describes the language as an effort to move beyond the structured programming model that became popular in the 1970s. The structured programming paradigm, in which flow control is managed with loops, conditionals, and subroutines, became popular after a 1968 paper titled "Go To Statement Considered Harmful" by computer scientist Edsger Dijkstra. Marron believes we can do better by getting rid of sources of complexity like loops, mutable state, and reference equality. The result is Bosque, which represents a programming paradigm that Marron, in a paper he wrote, calls "regularized programming."

Read more of this story at Slashdot.

Bifurcated Rivets: From FB

This is fun

Bifurcated Rivets: From FB

More goodness

Bifurcated Rivets: From FB

Rather good.

Bifurcated Rivets: From FB

Mr Cunningham at his best.

Bifurcated Rivets: From FB

JD 1975

ScreenAnarchy: The return of a true rebel: PEE-WEE's BIG ADVENTURE score to be re-released on vinyl

Vinyl is not only more popular than ever, but we are also living in an era in which movie scores are enjoying good health, so it's no surprise that a few of Hollywood's more popular and iconic scores are coming back (again, that is...) in this format of old. The next one to hit the streets is Danny Elfman's score for Pee-wee's Big Adventure, the first full-length feature directed by his long-time pal Tim Burton, released almost 34 years ago! The score, which came out in a limited edition last year to mark the 30th anniversary of the original issue of Varèse Sarabande's compact disc, cassette and first-time vinyl editions, will be...

[Read the whole post on screenanarchy.com...]

ScreenAnarchy: Cannes 2019: Malick, Jarmusch, Herzog, Ferrara, Bong Joon-Ho, Almodóvar and Refn Head to La Croisette

The veil over the 72nd edition of Cannes has been lifted as the festival's director Thierry Frémaux revealed the main competition line-up. The upcoming edition will boast a slate of seasoned veterans along with promising talents in the ranks of up-and-coming filmmakers.

[Read the whole post on screenanarchy.com...]

MetaFilter: that which man had made to hunt himself

an entire pack of Boston Dynamics robot dogs

It only takes 10 Spotpower (SP) to haul a truck across the Boston Dynamics parking lot (~1 degree uphill, truck in neutral).

title via BoingBoing post

Slashdot: Millions of Rehab Records Exposed on Unsecured Database

Records for potentially tens of thousands of patients seeking treatment at several addiction rehabilitation centers were exposed in an unsecured online database, an independent researcher revealed Friday. From a report: The 4.91 million documents included patients' names, as well as details of the treatments they received, according to Justin Paine, the researcher. Each patient had multiple records in the database, and Paine estimates that the records may cover about 145,000 patients. Paine notified the main treatment center, as well as the website hosting company, when he discovered the database. The data has since been made unavailable to the public. Paine found the data by typing keywords into the Shodan search engine that indexes servers and other devices that connect to the internet. "Given the stigma that surrounds addiction this is almost certainly not information the patients want easily accessible," Paine said in a blog post that he shared with CNET ahead of publication. Paine hunts for unsecured databases in his free time. His day job is head of trust and safety at web security company Cloudflare. The find is the latest example of a widespread problem: Any organization can easily store customer data on cloud-based services now, but few have the expertise to set them up securely. As a result, countless unsecured databases sit online and can be found by anyone with a few search skills. Many of those databases are full of sensitive personal data.

Read more of this story at Slashdot.

MetaFilter: R-E-S-P-E-C-T

Respect Is Coming
Respect World.

Open Culture: Street Art for Book Lovers: Dutch Artists Paint Massive Bookcase Mural on the Side of a Building

Bookcases are a great ice breaker for those who love to read.

What relief those shelves offer ill-at-ease partygoers... even when you don't know a soul in the room, there’s always a chance you’ll bond with a fellow guest over one of your hosts’ titles.

Occupy yourself with a good browse whilst waiting for someone to take the bait.

Now, with the aid of Dutch street artists Jan Is De Man and Deef Feed, some residents of Utrecht have turned their bookcases into street art, sparking conversation in their culturally diverse neighborhood.

De Man, whose close friends occupy the ground floor of a building on the corner of Mimosastraat and Amsterdam, had initially planned to render a giant smiley face on an exterior wall as a public morale booster, but the shape of the three-story structure suggested something a bit more literary.

The trompe-l'oeil Boekenkast (or bookcase) took a week to create, and features titles in eight different languages.

Look closely and you’ll notice both artists’ names (and a smiley face) lurking among the spines.

Design mags may make an impression by ordering books according to size and color, but this communal 2-D boekenkast looks to belong to an avid and omnivorous reader.

Some English titles that caught our eye:

Sapiens

The Subtle Art of Not Giving a F*ck

Keith Richards’ autobiography Life

The Curious Incident of the Dog in the Nighttime 

Pride and Prejudice

The Little Prince

The World According to Garp

Jumper

And a classy-looking hardbound Playboy collection that may or may not exist in real life.

(Readers, can you spot the other fakes?)

Boekenkast is the latest of a number of global bookshelf murals tempting literary pilgrims to take a selfie on the way to the local indie bookshop.

via Bored Panda

Related Content:

Japanese Artist Creates Bookshelf Dioramas That Magically Transport You Into Tokyo’s Back Alleys

157 Animated Minimalist Mid-Century Book Covers

David Bowie Songs Reimagined as Pulp Fiction Book Covers: Space Oddity, Heroes, Life on Mars & More

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine.  Join her in New York City this May for the next installment of her book-based variety show, Necromancers of the Public Domain. Follow her @AyunHalliday.

Street Art for Book Lovers: Dutch Artists Paint Massive Bookcase Mural on the Side of a Building is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

Slashdot: Windows 8 Will No Longer Get App Updates After This Summer

An anonymous reader shares a report: Last year, Microsoft announced when it would be killing app updates and distribution in the Windows Store for Windows Phone 8.x and Windows 8.x. At the time, the blog post stated that Windows Phone 8.x devices would stop receiving app updates after July 1, 2019, while Windows 8.x devices would get app updates through July 1, 2023. However, it seems as though plans have changed a little bit, as the blog post has quietly been updated earlier this month. Microsoft has changed the wording in the post to state that Windows 8 devices will stop getting updates for their apps at the same time as Windows Phone 8.x, that is, July 1 of this year. Windows 8.1 devices will continue to receive updates through the previously announced date in 2023.

Read more of this story at Slashdot.

ScreenAnarchy: EXCLUSIVE: Nina Menkes' QUEEN OF DIAMONDS Restoration Gets A New Trailer From Arbelos

A new distributor on the block is making big waves with their acquisitions and restorations of classic films from around the world. From the ashes of Cinelicious Pics has risen Arbelos, who've already worked on stunning restorations of Dennis Hopper's The Last Movie and Béla Tarr's Sátántangó, and among their many exciting upcoming projects we now have Nina Menkes' '90s feminist touchstone, Queen of Diamonds, to look forward to. Queen of Diamonds will open with a run at BAM in New York on April 26th, followed by an LA opening on June 15th. Both locations will offer opportunities to engage with Menkes in the form of Q&As, as well as the chance to attend Menkes' Sex and Power: The Visual Language of Oppression talk on...

[Read the whole post on screenanarchy.com...]

Colossal: Imitation China Plates and Layered Cut Paper Animals Explore the Sculptural Potential of Paper in a New Exhibition at Paradigm Gallery

Miniature paper work by Nayan and Vaishali, all images courtesy of Paradigm Gallery

Subtle manipulations, intricate cuts, and ornate collages are a few of the various ways contemporary artists are transforming paper today. These techniques and more are displayed in the upcoming exhibition pa•per, curated by Paradigm Gallery co-founder Jason Chen and featuring artists outside of the gallery’s roster. The list includes Nayan and Vaishali (previously), the India-based duo who spend 4-6 hours a day crafting precisely sliced and painted miniature animals. Kent-based artist Sally Hewitt creates the illusion of a body’s impression on cartridge paper by gently prodding the material with needles, bodkins, and embossing tools. Other included artists like Danielle Krysa and Lizzy Gill use collage, while Rosa Leff cuts traditional patterns and imagery found on fine china into cheap paper plates. The exhibition, hosted at Paradigm Gallery in Philadelphia, opens on April 26 and runs through May 18, 2019.

Danielle Krysa

Lizzy Gill

Sally Hewitt

Nayan and Vaishali

Rosa Leff

Albert Chamillard

Lucha Rodríguez

Daria Aksenova

Open Culture: Art Installation Dramatically Sheds Light on the Catastrophic Impact of Rising Sea-Levels

What does it accomplish to talk about climate change? Even those who talk about climate change professionally might find it hard to say. If you really want to make a point about rising sea levels — not to mention all the other changes predicted to afflict a warming Earth — you might do better to show, not tell. That reasoning seems to have motivated art projects like the giant hands reaching out from the waters of Venice previously featured here on Open Culture, and it looks even clearer in the more recent case of Lines (57° 59′ N, 7° 16′ W), an installation now on display on a Scottish island.

All images courtesy of Timo Aho and Pekka Niittyvirta

"At high tide, three synchronized lines of light activate in the Outer Hebrides off the west coast of Scotland," writes Designboom's Zach Andrews, and in the dark, "wrap around two structures and along the base of a mountain landscape.

Everything below these lines of light will one day be underwater." Created by Finnish artists Pekka Niittyvirta and Timo Aho for Taigh Chearsabhagh Museum & Arts CentreLines (57° 59 ?N, 7° 16 ?W) offers a stark reminder of the future humanity faces if climate change goes on as projected.

But why put up an installation of such apparent urgency in such a thinly populated, out-of-the-way place? "Low lying archipelagos like this one are especially vulnerable to the catastrophic effects of climate change," Andrews writes, adding that the Taigh Chearsabhagh Museum & Arts Centre itself "cannot even afford to develop on its existing site anymore due to the predicted rise of storm surge sea levels." But though the effects of rising sea levels may be felt first on islands like these, few predictions have those effects stopping there; worst-case scenarios won't spare our major metropolises, and certainly not the coastal ones.

You can get a sense of what Lines (57° 59′ N, 7° 16′ W) looks like in action from the photographs on Niittyvirta's site as well as the time-lapse video at the top, which shows the lines of light activating when their sensors detect high tide, then only those lines of light remaining by the time the sun has gone completely down. To experience the full impact of the installation, however, requires seeing it in person in the context for which it was created. So if you've been putting off that trip to the Outer Hebrides, now might be the time to finally take it — not just because of Niittyvirta and Aho's work, but because in a few years, it may not be quite the same place.

via Colossal

Related Content:

Animations Show the Melting Arctic Sea Ice, and What the Earth Would Look Like When All of the Ice Melts

Huge Hands Rise Out of Venice’s Waters to Support the City Threatened by Climate Change: A Poignant New Sculpture

Music for a String Quartet Made from Global Warming Data: Hear “Planetary Bands, Warming World”

A Song of Our Warming Planet: Cellist Turns 130 Years of Climate Change Data into Music

A Map Shows What Happens When Our World Gets Four Degrees Warmer: The Colorado River Dries Up, Antarctica Urbanizes, Polynesia Vanishes

A Century of Global Warming Visualized in a 35 Second Video

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Art Installation Dramatically Sheds Light on the Catastrophic Impact of Rising Sea-Levels is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

Slashdot: HDD Shipments Fell Nearly 13% in the First Quarter of 2019, 18% Since Last Year

Suren Enfiajyan writes: HDD shipments are continuing to decline. This holds for all major HDD vendors, with WDC showing the steepest yearly decline -- 26.1%, against 11.3% (Toshiba) and 14.4% (Seagate). Desktop HDD shipments are said to have fallen to just 24.5 million units, a drop of nearly 4 million units from the previous quarter. Laptop HDD shipments dropped more than 6 million units to hit the 37 million mark. Enterprise HDDs are said to have rebounded by nearly 1 million units, however, to around 11.5 million hard drives purchased in the quarter. Business customers essentially picked up the slack left by consumers. These shipments were likely affected by many factors. But there's also the simple fact that most people want SSDs instead of HDDs for most of their devices. Nobody wants to wait for their system to boot, their files to load, or their apps to finish routine tasks.

Read more of this story at Slashdot.

Penny Arcade: Comic: Furious

New Comic: Furious

MetaFilter: Lip Liners: Writers on the Power of Red Lipstick

"I have worn lipstick since long before he was born; every day, for many years. I can't remember, though, when habit became ritual. I feel as though if I could, if I could pin down the moment that commenced a daily ceremony, I might demarcate between girl and woman with clear, metaphoric ease. But when and how do you become a woman? It is a long, raw process that doesn't seem to end." That's Jessica Friedmann, one of a dozen writers included in this round-up from Longreads: When Lips Speak for Themselves: A Reading List on Red Lipstick.

The list:

* The History of Red Lipstick, From Ancient Egypt to Taylor Swift & Everything In Between (Marlen Komar, November 2016, Bustle)

* Someone Called Mother (Marcia Aldrich and Jill Talbot, March 2019, Longreads)

* On Blood, Birth, and The Talismanic Power of Red Lipstick (Jessica Friedmann, April 2018, Literary Hub)

* Incarnadine, the Bloody Red of Fashionable Cosmetics and Shakespearean Poetics (Katy Kelleher, March 2018, The Paris Review)

* Why Wearing Lipstick Is a Small Act of Joyful Resistance (Erika Thorkelson, October 2018, The Walrus)

* The Undeniable Power of Red Lipstick (Danielle Decker, March 2018, Medium)

* Five Writers Unpack the Power of Red Lipstick (January 2019, Elle)

MetaFilter: "Derry tonight. Absolute madness"

Northern Irish journalist Lyra McKee killed by gunfire amid clashes between police and dissident republican forces in Derry. Northern Irish police believe the New IRA are responsible for the killing and have opened a murder investigation. McKee had been a rising journalistic star: She had been named one of Forbes Europe's 30 under 30 in media in 2016 and had a two-book deal with Faber. Her writing focused on, among other things, her own memories of growing up gay in Belfast (which became a short film), the surge in suicide rates in Northern Ireland in the years following the Good Friday Agreement, efforts by families of those killed during the Troubles to find answers, and the still-fragile power-sharing agreement between unionist and republican factions in Northern Ireland. McKee was 29 years old.

Tributes to McKee are still coming out. Post title quotes McKee's last tweet before she was killed (no link, as her account has been taken private).

MetaFilter: A Brief History Of Cooties

A Brief History of Cooties, courtesy of the Smithsonian: why a 100-year-old game is still spreading across our playgrounds. (Reading this article reminded me I actually had this game when I was a kid. How odd.)

Recent additions: lsp-test 0.5.1.1

Added by AlanZimmerman, Fri Apr 19 12:43:56 UTC 2019.

Functional test framework for LSP servers.

Recent additions: termbox-banana 0.1.1

Added by mitchellwrosen, Fri Apr 19 12:20:00 UTC 2019.

reactive-banana + termbox

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Photographer Spotlight: Katharina Bayer

Katharina Bayer

Katharina Bayer’s Website

Katharina Bayer on Instagram

Recent additions: haskell-lsp 0.9.0.0

Added by AlanZimmerman, Fri Apr 19 12:01:27 UTC 2019.

Haskell library for the Microsoft Language Server Protocol

Recent additions: haskell-lsp-types 0.9.0.0

Added by AlanZimmerman, Fri Apr 19 12:01:10 UTC 2019.

Haskell library for the Microsoft Language Server Protocol, data types

Colossal: A Suspended Neon Net Invites Guests to Bounce Stories Above a Paris Shopping Center

A circular net in bright shades of neon greens, yellows, and pinks hovers above the Paris-based shopping complex Galeries Lafayette Paris Haussmann in a new installation celebrating the impending arrival of summer. The suspended playground gives visitors a chance to lie underneath the brilliant dome at the center of the building while also watching shoppers bustle on the ground floor below. The installation is part of the store’s Funorama initiative, which, in addition to the central play area, also includes “fun zones” such as old-school arcade games, a VR experience, and foosball. Galeries Lafayette Paris Haussmann invites guests to play, bounce, and lounge on the colorful structure through June 9, 2019. (via fubiz)

ScreenAnarchy: Dustin Ferguson's retro-Epic “Moon of The Blood Beast” headed to VHS

Dustin Ferguson's retro-Epic “Moon of The Blood Beast,” starring D.T. Carney, Vida Ghaffari, Dawna Lee Heising, Mike Ferguson, Alan Maxson and Alana Evans, is headed to VHS. The award-winning filmmaker announces plans to release his newest film on various formats, including VHS. As production comes to an end on Ferguson's 70th feature film “Moon of The Blood Beast,” details emerge regarding the release. A limited edition VHS from Nemesis Video (who previously released Ferguson's “Blood Claws” and “Shockumentary”) will drop in the next couple of months, followed by a streaming release through Troma Now (“The Toxic Avenger”). DVD details are still under wraps, but they expect to have a release date soon. The film is about a small coastal town terrorized by the...

[Read the whole post on screenanarchy.com...]

Open Culture: Billy Collins Teaches Poetry in a New Online Course

In its latest release, Masterclass has launched a new course, "Billy Collins Teaches Reading and Writing Poetry," which they describe in the trailer above and the text below. You can sign up here. The cost is $90. Or pay $180 and get an annual pass to their entire catalogue of courses covering a wide range of subjects--everything from filmmaking (Werner Herzog, David Lynch, Martin Scorsese), to acting (Helen Mirren) and creative writing (Margaret Atwood), to taking photographs (Annie Leibovitz) and writing plays (David Mamet). Each course is taught by an eminent figure in their field.

Known for his wit, humor, and profound insight, Billy is one of the best-selling and most beloved contemporary poets in the United States. He regularly sells out poetry readings, frequently charms listeners on NPR’s A Prairie Home Companion, and his work has appeared in anthologies, textbooks, and periodicals around the world.

Called “America’s Favorite Poet” by the Wall Street Journal, Billy served two terms as U.S. Poet Laureate and is also a former New York State Poet Laureate. He’s been honored with the Mark Twain Prize for Humor in Poetry and a number of prestigious fellowships. He’s taught at Columbia University, Sarah Lawrence, and Lehman College, and he’s also a distinguished professor at the City University of New York. Now he’s teaching his first-ever MasterClass.

In his MasterClass on Reading and Writing Poetry, Billy teaches you the building blocks of poems and their unique power to connect reader and writer. From subject and form to rhyme and meter, learn to appreciate the pleasures of a well-turned poem. Discover Billy’s philosophy on the craft of poetry and learn how he creates a poet’s persona, incorporates humor, and lets imagination lead the way. By breaking down his own approach to composing poetry and enjoying the work of others, Billy invites students to explore the gifts poetry has to offer.

In this online poetry class, you’ll learn about:
• Using humor as a serious strategy
• The fundamental elements of poetry
• Billy’s writing process
• Turning a poem
• Exploring subjects
• Rhyme and meter
• Sound pleasures
• Finding your voice
• Using form to engage readers
• The visual distinctions of poetry

FYI: If you sign up for a MasterClass course by clicking on the affiliate links in this post, Open Culture will receive a small fee that helps support our operation.

Billy Collins Teaches Poetry in a New Online Course is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

Open Culture: The Mueller Report Is #1, #2 and #3 on the Amazon Bestseller List: You Can Get It Free Online

Peruse the Amazon bestselling book list and you'll find that the long-awaited Mueller Report is not just the #1 bestseller. It's also the #2 bestseller and the #3 bestseller. Collusion and obstruction--it's the stuff that makes for good book sales, it appears.

You can pre-order the Mueller Report in book, ebook and even audio book formats via the links above. But if you want to download the report for free, and start reading it asap, simply head to the Washington Post and New York Times. Or go straight to the source at the Justice Department web site. Politico has a searchable PDF version here.

Follow Open Culture on Facebook and Twitter and share intelligent media with your friends. Or better yet, sign up for our daily email and get a daily dose of Open Culture in your inbox. 

If you'd like to support Open Culture and our mission, please consider making a donation to our site. It's hard to rely 100% on ads, and your contributions will help us provide the best free cultural and educational materials.

The Mueller Report Is #1, #2 and #3 on the Amazon Bestseller List: You Can Get It Free Online is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

Explosm.net: Comic for 2019.04.19

New Cyanide and Happiness Comic

ScreenAnarchy: Notes on Streaming: THE RUTHLESS Takes Criminal Aim, RAMY Takes a Spiritual Journey

Plus: 'Blue My Mind' shakes up Shudder.

[Read the whole post on screenanarchy.com...]

new shelton wet/dry: ho, ho, ho, pimp

Where Does Time Go When You Blink? Retinal input is frequently lost because of eye blinks, yet humans rarely notice these gaps in visual input. […] Here, we investigated whether the subjective sense of time is altered by spontaneous blinks. […] The results point to a link between spontaneous blinks, previously demonstrated to induce activity suppression [...]

new shelton wet/dry: The ball is round, the game is long

Two alternative hypotheses have been proposed to explain why grunting in tennis may impede opponents’ predictions, referred to as the distraction account (i.e., grunts capture attentional resources necessary for anticipation) and the multisensory integration account (i.e., auditory information from the grunt systematically influences ball trajectory prediction typically assumed to rely on visual information). […] our [...]

new shelton wet/dry: How you gonna do it if you really don’t want to dance, by standing on the wall?

Loie Fuller (1862-1928) conquered Paris on her opening night at the Folies-Bergère on November 5, 1892. Manipulating with bamboo sticks an immense skirt made of over a hundred yards of translucent, iridescent silk, the dancer evoked organic forms – butterflies, flowers, and flames – in perpetual metamorphosis through a play of colored lights. Loie Fuller’s innovative lighting effects, [...]

new shelton wet/dry: Bene ascolta chi la nota

In a series of experiments, students listened to stories and then took a test of how much information they remembered an hour later. Their recall spiked by 10 to 30 percent if they had been randomly assigned to sit and do nothing in a dark, quiet room for a few minutes right after hearing the [...]

Colossal: A Photo Series by Yoko Ishii Documents the Free-Ranging Urban Deer of Nara, Japan

From the series Beyond the Border by Yoko Ishii, all images courtesy of the photographer

In Nara, Japan, Sika deer are not restricted to forests or parks, but rather mingle in the urban center much like humans—congregating in green spaces, browsing open shops, and even lining up neatly to pass through turnstiles. Although viewed as a burden in most of the country, in Nara the deer population is sacred and protected by law. Beyond the Border, an ongoing series by Kanagawa-based photographer Yoko Ishii, captures the deer in everyday moments across the city, from collectively passing down a major street, to pausing to feed their young below a stoplight.

Ishii was inspired to photograph the ways the animals interact with common city infrastructure after observing a pair of deer paused at an intersection in 2011, and especially loves photographing them while the city is at its most bare. “These picturesque moments when early in the morning the deer can be found standing in the middle of desolate intersections, not bound by man’s borders and laws, yet inhabiting a man-made city is fascinating and inspiring,” she explains in a statement about her series.

Beyond the Border explores how the animals exist outside of the basic rules and regulations strictly crafted for the city’s human population, instead living free amongst the many pavement markings and stoplights. Ishii published a book of her photography titled Dear Deer in 2015, and will be included in this year’s Auckland Festival of Photography in New Zealand from May 31 to June 16, 2019. You can see more of her recent work on her website and Facebook. (via Īgnant)

From the series Beyond the Border by Yoko Ishii, all images courtesy of the photographer

Open Culture: 9 Science-Fiction Authors Predict the Future: How Jules Verne, Isaac Asimov, William Gibson, Philip K. Dick & More Imagined the World Ahead

Pressed to give a four-word definition of science fiction, one could do worse than "stories about the future." That stark simplification does the complex and varied genre a disservice, as the defenders of science fiction against its critics won't hesitate to claim. And those critics are many, including most recently the writer Ian McEwan, despite the fact that his new novel Machines Like Me is about the introduction of intelligent androids into human society. Sci-fi fans have taken him to task for distancing his latest book from a genre he sees as insufficiently concerned with the "human dilemmas" imagined technologies might cause, but he has a point: set in an alternate 1982, Machines Like Me isn't about the future but the past.

Then again, perhaps McEwan's novel is about the future, and the androids simply haven't yet arrived on our own timeline — or perhaps, like most enduring works of science fiction, it's ultimately about the present moment. The writers in the sci-fi pantheon all combine a heightened awareness of the concerns of their own eras with a certain genuine prescience about things to come.

Writing in the early 1860s, Jules Verne imagined a suburbanized 20th century with gas-powered cars, electronic surveillance, fax machines and a population at once both highly educated and crudely entertained. Verne also included a simple communication system that can't help but remind us of the internet we use today — a system whose promise and peril Neuromancer author William Gibson described on television more than 130 years later.

In the list below we've rounded up Verne and Gibson's predictions about the future of technology and humanity along with those of seven other science-fiction luminaries. Despite coming from different generations and possessing different sensibilities, these writers share not just a concern with the future but the ability to express that concern in a way that still interests us, the denizens of that future. Or rather, something like that future: when we hear Aldous Huxley predict in 1950 that "during the next fifty years mankind will face three great problems: the problem of avoiding war; the problem of feeding and clothing a population of two and a quarter billions which, by 2000 A.D., will have grown to upward of three billions, and the problem of supplying these billions without ruining the planet’s irreplaceable resources," we can agree with the general picture even if he lowballed global population growth by half.

In 1964, Arthur C. Clarke predicted not just the internet but 3D printers and trained monkey servants. In 1977, the more dystopian-minded J.G. Ballard came up with something that sounds an awful lot like modern social media. Philip K. Dick's timeline of the years 1983 through 2012 includes Soviet satellite weapons, the displacement of oil as an energy source by hydrogen, and colonies both lunar and Martian. Envisioning the world of 2063, Robert Heinlein included interplanetary travel, the complete curing of cancer, tooth decay, and the common cold, and a permanent end to housing shortages. Even Mark Twain, despite not normally being regarded as a sci-fi writer, imagined a "'limitless-distance' telephone" system introduced and "the daily doings of the globe made visible to everybody, and audibly discussable too, by witnesses separated by any number of leagues."

As much as the hits impress, they tend to be outnumbered in even science fiction's greatest minds by the misses. But as you'll find while reading through the predictions of these nine writers, what separates science fiction's greatest minds from the rest is the ability to come up with not just interesting hits but interesting misses as well. Considering why they got right what they got right and why they got wrong what they got wrong tells us something about the workings of their imaginations, but also about the eras they did their imagining in — and how their times led to our own, the future to which so many of them dedicated so much thought.

Follow Open Culture on Facebook and Twitter and share intelligent media with your friends. Or better yet, sign up for our daily email and get a daily dose of Open Culture in your inbox. 

If you'd like to support Open Culture and our mission, please consider making a donation to our site. It's hard to rely 100% on ads, and your contributions will help us provide the best free cultural and educational materials.

Related Content:

Read Hundreds of Free Sci-Fi Stories from Asimov, Lovecraft, Bradbury, Dick, Clarke & More

Free Science Fiction Classics on the Web: Huxley, Orwell, Asimov, Gaiman & Beyond

The Encyclopedia of Science Fiction: 17,500 Entries on All Things Sci-Fi Are Now Free Online

Isaac Asimov Recalls the Golden Age of Science Fiction (1937-1950)

The Art of Sci-Fi Book Covers: From the Fantastical 1920s to the Psychedelic 1960s & Beyond

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

9 Science-Fiction Authors Predict the Future: How Jules Verne, Isaac Asimov, William Gibson, Philip K. Dick & More Imagined the World Ahead is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

Colossal: Long-Limbed Mythical Characters Carved from Hawthorn Wood by Tach Pollard

“Owlman Rising”

Sculptor Tach Pollard (previously) works with sustainably sourced hawthorn wood to form lustrous sculptures of mythological figures. After carving the wood, the UK-based artist finishes it with blow torches to form the dark bodies that contrast with the pale, peaceful faces on each sculptural figure. Pollard draws inspiration from myths and spiritual traditions from around the world, including Inuit and Celtic traditions, and is particularly drawn to the notions of shapeshifting and sea creatures. You can see more of his mystical sculptures on Instagram and peruse works available for purchase on Etsy.

“Mellisae Returns”

“Wind Walker”

“Sea Wolven”

“Fire Antler”

“Freya”

“Face Like The Sun II”

“Wolven Walking”

Colossal: Delight-Inducing Augmented Reality Videos by Vernon James Manlapaz Combine Everyday Scenery with Fantastical Interlopers

Everyday spaces like street markets, city sidewalks, and restaurants become fantastical playlands in the mind of Vernon James Manlapaz. The designer, who has several years of experience in animation and visual effects, creates delight-inducing augmented realities that he shares on Instagram with his more than 150,000 followers.

Manlapaz tells Colossal that his digital creations are a combination of pre-planned concepts and spontaneous inspiration. The designer always keeps his phone and 360 camera on hand so he can capture footage for scenery at any time. Manlapaz explains that he chooses to work with familiar objects and concepts that everyone can identify with as the basis for his wonder-inducing moments.

The content I make is always about bringing out that childlike wonder we all have. The goal has always been to bring joy and happiness to everyone who comes across my work, so that even the 10 seconds they spend watching the content brings a few moments of joy to their life.

Manlapaz was born and raised in Manila, Philippines. He now lives in Los Angeles where he works as a visual effects designer at Snap Inc., which you may know as Snapchat. Follow along with Manlapaz’s digital delights via Instagram. (via It’s Nice That)

Planet Haskell: Dominic Steinitz: Naperian Functors meet surface sea temperatures

Blog post on my new site.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Artist Spotlight: Emma Webster

Emma Webster (previously featured here).

Emma Webster

Emma Webster’s Website

Emma Webster on Instagram

Emma Webster / Diane Rosenstein Gallery

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Photographer Spotlight: Ryan Walker

A selection of work by Toronto-based photographer and cinematographer Ryan Walker.
Ryan Walker’s Website

Ryan Walker on Instagram

Planet Haskell: Ken T Takusagawa: [nzfpggsk] Type annotations on operators

In Haskell, one can assert the type of an identifier in an expression by attaching a type annotation with "::".  For infix operators, one can attach such a type annotation by rewriting it in function notation: surround the operator with parentheses and move it from infix to prefix position:

three :: Int;
three = ((+)::Int -> Int -> Int) 1 2;

It seems impossible to attach a type annotation to an operator while keeping it in infix notation.  This is a bit problematic because a common use of an infix operator is to use many of them in series, e.g., 1 + 2 + 3 + 4, but it is awkward to rewrite a long series of uses of an infix operator into prefix notation.

ten = (+) ((+) ((+) 1 2) 3) 4

Tangentially, for addition, we could do foldl' (+) 0 [1, 2, 3, 4], but fold won't work with operators like (.), ($), or (>>=) which take operands of many different types within a series expression.  Previously: syntactic fold and fold (.) via Data.Dynamic.

The motivation was, if one is using a polymorphic function or operator, it may be pedagogically helpful to attach a type annotation at the point of use so the reader knows what "version" of a polymorphic function is being invoked.  The annotation gives the concrete types at the point of use, rather than the polymorphic type with type variables that library documentation will give you.

A nice (hypothetical) IDE could do type inference to figure out the type at the point of use, then display it as a popup.

Penny Arcade: News Post: Gravitas

Tycho: The whole point of Game of Thrones is that you thought one thing was gonna happen but then another thing happens, and that works until it doesn’t anymore. At least for me.  I got out at the end of the Fourth Season, Fourthmeal if you will, at what might have been the last episode but maybe it wasn’t and I don’t super care if it was.  There was a fight between two characters with a result I didn’t buy, with a result in the vein of “yeah, dummy, well, ha ha!  This is how it’s gonna go instead” and then rubbing your nose in it, and I…

Planet Haskell: Holden Karau: Powering Tensorflow with Big Data @ CERN Computing Seminar

Thanks for joining me on 2019-04-03 for Powering Tensorflow with Big Data. I'll update this post with the slides soon. Comment below to join in the discussion :). Talk feedback is appreciated at http://bit.ly/holdenTalkFeedback

The Shape of Code: OSI licenses: number and survival

There is a lot of source code available which is said to be open source. One definition of open source is software that has an associated open source license. Along with promoting open source, the Open Source Initiative (OSI) has a rigorous review process for open source licenses (so they say, I have no expertise in this area), and have become the major licensing brand in this area.

Analyzing the use of licenses in source files and packages has become a niche research topic. The majority of source files don’t contain any license information, and, depending on language, many packages don’t include a license either (see Understanding the Usage, Impact, and Adoption of Non-OSI Approved Licenses). There is some evolution in license usage, i.e., changes of license terms.

I knew that a fair few open source licenses had been created, but how many, and how long have they been in use?

I don’t know of any other work in this area, and the fastest way to get lots of information on open source licenses was to scrape the brand leader’s licensing page, using the Wayback Machine to obtain historical data. Starting in mid-2007, the OSI licensing page kept to a fixed format, making automatic extraction possible (via an awk script); there were few pages archived for 2000, 2001, and 2002, and no pages available for 2003, 2004, or 2005 (if you have any OSI license lists for these years, please send me a copy).

What do I now know?

Over the years OSI have listed 110 different open source licenses, and currently lists 81. The actual number of license names listed, since 2000, is 205; the ‘extra’ licenses are the result of naming differences, such as the use of dashes, inclusion of a bracketed acronym (or not), license vs License, etc.

Below is the Kaplan-Meier survival curve (with 95% confidence intervals) of licenses listed on the OSI licensing page (code+data):

Survival curve of OSI licenses.
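
For readers unfamiliar with it, the estimate plotted in such a curve is the standard Kaplan-Meier estimator (general statistical background, not spelled out in the post):

$$ \hat{S}(t) \;=\; \prod_{i \,:\, t_i \le t} \left( 1 - \frac{d_i}{n_i} \right) $$

where, in this setting, $t_i$ would be a date on which licenses were delisted, $d_i$ the number delisted on that date, and $n_i$ the number still listed just beforehand.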

How many license proposals have been submitted for review, but not been approved by OSI?

Patrick Masson, from the OSI, kindly replied to my query on number of license submissions. OSI doesn’t maintain a count, and what counts as a submission might be difficult to determine (OSI recently changed the review process to give a definitive rejection; they have also started providing a monthly review status). If any reader is keen, there is an archive of mailing list discussions on license submissions; trawling these would make a good thesis project :-)

Daniel Lemire's blog: Parsing short hexadecimal strings efficiently

It is common to represent binary data or numbers using the hexadecimal notation. Effectively, we use a base-16 representation where the first 10 digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and where the remaining digits are A, B, C, D, E, F, with the added complexity that we can use either lower or upper case (A or a).

We sometimes want to convert strings of hexadecimal characters into a numerical value. For simplicity, let us assume that we have sequences of four characters. Each character is represented as a byte value using its corresponding ASCII code point. So ‘0’ becomes 48, ‘1’ is 49, ‘A’ is 65 and so forth.

The most efficient approach I have found is to simply rely on memoization. Build a 256-byte array where 48 (or ‘0’) is mapped to 0, 65 (or ‘A’) is mapped to 10 and so forth. As an extra feature, map all disallowed values to -1 so we can detect them. Then just lookup the four values and combine them.

uint32_t hex_to_u32_lookup(const uint8_t *src) {
  // look up the 4-bit value of each of the four ASCII characters
  uint32_t v1 = digittoval[src[0]];
  uint32_t v2 = digittoval[src[1]];
  uint32_t v3 = digittoval[src[2]];
  uint32_t v4 = digittoval[src[3]];
  // combine the four nibbles, most significant digit first
  return v1 << 12 | v2 << 8 | v3 << 4 | v4;
}
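
The post does not spell out digittoval itself. Here is a minimal sketch, assuming a 256-entry array of signed bytes filled in once before use; the initializer function init_digittoval is my own illustration, not code from the post:

#include <stdint.h>
#include <string.h>

static int8_t digittoval[256];

// Fill the table: hex digits map to their values, every other byte maps to -1.
static void init_digittoval(void) {
  memset(digittoval, -1, sizeof(digittoval));
  for (int c = '0'; c <= '9'; c++) digittoval[c] = (int8_t)(c - '0');
  for (int c = 'A'; c <= 'F'; c++) digittoval[c] = (int8_t)(c - 'A' + 10);
  for (int c = 'a'; c <= 'f'; c++) digittoval[c] = (int8_t)(c - 'a' + 10);
}

With signed entries, a -1 sign-extends to 0xFFFFFFFF when widened to uint32_t in the function above, so any valid four-digit result fits in 16 bits while any invalid input sets higher bits; a caller could reject results above 0xFFFF.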

What else could you do?

You could replace the table lookup with a fancy mathematical function:

uint32_t convertone(uint8_t c) {
  // '0'-'9': c >> 6 is 0, leaving c & 0xF; 'A'-'F' and 'a'-'f': c >> 6 is 1, adding 9 to reach 10-15
  return (c & 0xF) + 9 * (c >> 6);
}
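
Presumably, for the comparison below, the one-character function gets combined across four characters the same way as the lookup version; a sketch under that assumption (the wrapper name hex_to_u32_math is mine):

uint32_t hex_to_u32_math(const uint8_t *src) {
  // same shift-and-or combination as the lookup version,
  // with the table lookups replaced by arithmetic
  return convertone(src[0]) << 12 | convertone(src[1]) << 8 |
         convertone(src[2]) << 4 | convertone(src[3]);
}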

How do they compare? I implemented both of these and I find that the table lookup approach is more than twice as fast when the function is called frequently. I report the number of instructions and the number of cycles to parse 4-character sequences on a Skylake processor (code compiled with GNU GCC 8).

        Instruction count   Cycle count
lookup         18               4.3
math           38               9.6

I am still frustrated by the cost of this operation. Using 4 cycles to convert 4 characters to a number feels like too much of an expense.

My source code is available (run it under Linux).

Further reading: Fast hex number string to int by Johny Lee; Using PEXT to convert from hexadecimal ASCII to number by Mula.

Penny Arcade: Comic: Gravitas

New Comic: Gravitas

Michael Geist: My ADA Keynote: What the Canadian Experience Teaches About the Future of Copyright Reform

In late March of this year, I travelled to Canberra, Australia to deliver a keynote address at the Australian Digital Alliance’s 2019 Copyright Forum. The ADA is a leading voice on copyright issues in Australia and its annual Copyright Forum brings together government, creators, education, libraries, and the broader public to explore copyright issues. This year’s event included innovative film makers, the President of the Australian Society of Authors, European Parliament MEP Julia Reda, as well as leading academics, trade negotiators, government policy experts, and many others.

My talk focused on the Canadian copyright experience, using real data to dispel the misleading claims about the impact of Canada’s 2012 reforms. A video of the keynote has been posted to YouTube and is embedded below.

The post My ADA Keynote: What the Canadian Experience Teaches About the Future of Copyright Reform appeared first on Michael Geist.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Artist Spotlight: Pablo Benzo

Pablo Benzo (previously featured here).

Pablo Benzo

Pablo Benzo’s Website

Pablo Benzo on Instagram

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Artist Spotlight: Christopher Burk

Christopher Burk (previously featured here).

Christopher Burk

Christopher Burk’s Website

Christopher Burk on Instagram

Explosm.net: Comic for 2019.04.17

New Cyanide and Happiness Comic

Planet Haskell: Donnacha Oisín Kidney: Probability Monads in Cubical Agda

Posted on April 17, 2019
Tags: Agda, Probability

Cubical Agda has just come out, and I’ve been playing around with it for a bit. There’s a bunch of info out there on the theory of cubical types, and Homotopy Type Theory more generally (cubical type theory is kind of like an “implementation” of Homotopy type theory), but I wanted to make a post demonstrating cubical Agda in practice, and one of its cool uses from a programming perspective.

So What is Cubical Agda?

I don’t really know! Cubical type theory is quite complex (even for a type theory), and I’m not nearly qualified to properly explain it. In lieu of a proper first-principles explanation, then, I’ll try and give a few examples of how it differs from normal Agda, before moving on to the main example of this post.

Imports
{-# OPTIONS --cubical #-}

open import ProbabilityModule.Semirings

module ProbabilityModule.Monad {s} (rng : Semiring s) where

open import Cubical.Core.Everything
open import Cubical.Relation.Everything
open import Cubical.Foundations.Prelude hiding (_≡⟨_⟩_) renaming (_∙_ to _;_)
open import Cubical.HITs.SetTruncation
open import ProbabilityModule.Utils
Extensionality
One of the big annoyances in standard Agda is that we can’t prove the following:
extensionality : ∀ {f g : A → B}
           → (∀ x → f x ≡ g x)
           → f ≡ g
It’s emblematic of a wider problem in Agda: we can’t say “two things are equal if they always behave the same”. Infinite types, for instance (like streams) are often only equal via bisimulation: we can’t translate this into normal equality in standard Agda. Cubical type theory, though, has a different notion of “equality”, which allows a wide variety of things (including bisimulations and extensional proofs) to be translated into a proper equality
extensionality = funExt
Isomorphisms
One of these such things we can promote to a “proper equality� is an isomorphism. In the cubical repo this is used to prove things about binary numbers: by proving that there’s an isomorphism between the Peano numbers and binary numbers, they can lift any properties on the Peano numbers to the binary numbers.

So those are two useful examples, but the most interesting use I’ve seen so far is the following:

Higher Inductive Types

Higher Inductive Types are an extension of normal inductive types, like the list:
module NormalList where
 data List {a} (A : Set a) : Set a where
   [] : List A
   _∷_ : A → List A → List A

They allow us to add new equations to a type, as well as constructors. To demonstrate what this means, as well as why you’d want it, I’m going to talk about free objects.

Very informally, a free object on some algebra is the minimal type which satisfies the laws of the algebra. Lists, for instance, are the free monoid. They satisfy all of the monoid laws (∙ is ++ and ε is []):

(x ∙ y) ∙ z = x ∙ (y ∙ z)
x ∙ ε = x
ε ∙ x = x

But nothing else. That means they don’t satisfy any extra laws (like, for example, commutativity), and they don’t have any extra structure they don’t need.

How did we get to the definition of lists from the monoid laws, though? It doesn’t look anything like them. It would be nice if there was some systematic way to construct the corresponding free object given the laws of an algebra. Unfortunately, in normal Agda, this isn’t possible. Consider, for instance, if we added the commutativity law to the algebra: x ∙ y = y ∙ x. Not only is it not obvious how we’d write the corresponding free object, it’s actually not possible in normal Agda!

This kind of problem comes up a lot: we have a type, and we want it to obey just one more equation, but there is no inductive type which does so. Higher Inductive Types solve the problem in quite a straightforward way. So we want lists to satisfy another equation? Well, just add it to the definition!

module OddList where
 mutual
  data List {a} (A : Set a) : Set a where
    [] : List A
    _∷_ : A → List A → List A
    comm : ∀ xs ys → xs ++ ys ≡ ys ++ xs

  postulate _++_ : List A → List A → List A
Now, when we write a function that processes lists, Agda will check that the function behaves the same on xs ++ ys and ys ++ xs. As an example, here’s how you might define the free monoid as a HIT:
data FreeMonoid {a} (A : Set a) : Set a where
  [_] : A → FreeMonoid A
  _∙_ : FreeMonoid A → FreeMonoid A → FreeMonoid A
  ε : FreeMonoid A
  ∙ε : ∀ x → x ∙ ε ≡ x
  ε∙ : ∀ x → ε ∙ x ≡ x
  assoc : ∀ x y z → (x ∙ y) ∙ z ≡ x ∙ (y ∙ z)

It’s quite a satisfying definition, and very easy to see how we got to it from the monoid laws.

Now, when we write functions, we have to prove that those functions themselves also obey the monoid laws. For instance, here’s how we would take the length:
module Length where
  open import ProbabilityModule.Semirings.Nat
  open Semiring +-*-ℕ

  length : FreeMonoid A → ℕ
  length [ x ] = 1
  length (xs ∙ ys) = length xs + length ys
  length ε = 0
  length (∙ε xs i) = +0 (length xs) i
  length (ε∙ xs i) = 0+ (length xs) i
  length (assoc xs ys zs i) = +-assoc (length xs) (length ys) (length zs) i

The first three clauses are the actual function: they deal with the three normal constructors of the type. The next three clauses prove that those previous clauses obey the equalities defined on the type.

With the preliminary stuff out of the way, let’s get on to the type I wanted to talk about:

Probability

First things first, let’s remember the classic definition of the probability monad: a distribution is represented as a list of outcomes paired with their probabilities (in Haskell terms, something like newtype Prob a = Prob [(a, Rational)]).

Definitionally speaking, this doesn’t really represent what we’re talking about. For instance, two lists can express the same distribution while having different representations: the same outcome can be listed twice with its probability split in two, or the same weighted outcomes can be listed in a different order.

So it’s the perfect candidate for an extra equality clause like we had above.

Second, in an effort to generalise, we won’t deal specifically with Rational, and instead we’ll use any semiring. After all of that, we get the following definition:

open Semiring rng

module Initial where
 infixr 5 _&_∷_
 data 𝒫 (A : Set a) : Set (a ⊔ s) where
   []  : 𝒫 A
   _&_∷_ : (p : R) → (x : A) → 𝒫 A → 𝒫 A
   dup : ∀ p q x xs → p & x ∷ q & x ∷ xs ≡ p + q & x ∷ xs
   com : ∀ p x q y xs → p & x ∷ q & y ∷ xs ≡ q & y ∷ p & x ∷ xs
   del : ∀ x xs → 0# & x ∷ xs ≡ xs

The three extra conditions are pretty sensible: the first removes duplicates, the second makes things commutative, and the third removes impossible events.

Let’s get to writing some functions, then:

 ∫ : (A → R) → 𝒫 A → R
 ∫ f [] = 0#
 ∫ f (p & x ∷ xs) = p * f x + ∫ f xs
 ∫ f (dup p q x xs i) = begin[ i ]
   p * f x + (q * f x + ∫ f xs) ≡˘⟨ +-assoc (p * f x) (q * f x) (∫ f xs) ⟩
   (p * f x + q * f x) + ∫ f xs ≡˘⟨ cong (_+ ∫ f xs) (⟨+⟩* p q (f x))  ⟩
   (p + q) * f x + ∫ f xs ∎
 ∫ f (com p x q y xs i) = begin[ i ]
   p * f x + (q * f y + ∫ f xs) ≡˘⟨ +-assoc (p * f x) (q * f y) (∫ f xs) ⟩
   p * f x + q * f y + ∫ f xs   ≡⟨ cong (_+ ∫ f xs) (+-comm (p * f x) (q * f y)) ⟩
   q * f y + p * f x + ∫ f xs   ≡⟨ +-assoc (q * f y) (p * f x) (∫ f xs) ⟩
   q * f y + (p * f x + ∫ f xs) ∎
 ∫ f (del x xs i) = begin[ i ]
   0# * f x + ∫ f xs ≡⟨ cong (_+ ∫ f xs) (0* (f x)) ⟩
   0# + ∫ f xs       ≡⟨ 0+ (∫ f xs) ⟩
   ∫ f xs ∎

This is much more involved than the free monoid function, but the principle is the same: we first write the actual function (on the first three lines), and then we show that the function doesn’t care about the “rewrite rules” we have in the next three clauses.

Before going any further, we will have to amend the definition a little. The problem is that if we tried to prove something about any function on our 𝒫 type, we’d have to prove equalities between equalities as well. I’m sure that this is possible, but it’s very annoying, so I’m going to use a technique I saw in this repository. We add another rule to our type, stating that all equalities on the type are themselves equal. The new definition looks like this:

infixr 5 _&_∷_
data 𝒫 (A : Set a) : Set (a ⊔ s) where
  []  : 𝒫 A
  _&_∷_ : (p : R) → (x : A) → 𝒫 A → 𝒫 A
  dup : ∀ p q x xs → p & x ∷ q & x ∷ xs ≡ p + q & x ∷ xs
  com : ∀ p x q y xs → p & x ∷ q & y ∷ xs ≡ q & y ∷ p & x ∷ xs
  del : ∀ x xs → 0# & x ∷ xs ≡ xs
  trunc : isSet (� A)

Eliminators

Unfortunately, after adding that case we have to deal with it explicitly in every pattern-match on 𝒫. We can get around it by writing an eliminator for the type which deals with it itself. Eliminators are often irritating to work with, though: we give up the nice pattern-matching syntax we get when we program directly. It’s a bit like having to rely on church encoding everywhere.
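For a feel of what that’s like, here is the church encoding of lists in Haskell (a sketch; Church, nil and cons are illustrative names):

  {-# LANGUAGE RankNTypes #-}

  -- A church-encoded list is its own fold.
  newtype Church a = Church { fold :: forall b. (a -> b -> b) -> b -> b }

  nil :: Church a
  nil = Church (\_ z -> z)

  cons :: a -> Church a -> Church a
  cons x xs = Church (\f z -> f x (fold xs f z))

  -- Every consumer has to go through the fold: no pattern matching.
  sumChurch :: Church Int -> Int
  sumChurch xs = fold xs (+) 0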

However, we can get back some pattern-like syntax if we use copatterns. Here’s an example of what I mean, for folds on lists:

module ListElim where
 open NormalList
 open import ProbabilityModule.Semirings.Nat
 open Semiring +-*-𝕊 renaming (_+_ to _ℕ+_)

 record [_↦_] (A : Set a) (B : Set b) : Set (a ⊔ b) where
   field
     [_][] : B
     [_]_∷_ : A → B → B
   [_]↓ : List A → B
   [ [] ]↓ = [_][]
   [ x ∷ xs ]↓ = [_]_∷_ x [ xs ]↓
 open [_↦_]
 
 sum-alg : [ ℕ ↦ ℕ ]
 [ sum-alg ][] = 0
 [ sum-alg ] x ∷ xs = x ℕ+ xs
 
 sum : List ℕ → ℕ
 sum = [ sum-alg ]↓

For the probability monad, there’s an eliminator for the whole thing, an eliminator for propositional proofs, and a normal eliminator for folding. Their definitions are quite long, but mechanical.

Eliminator Definitions
record ⟅_⇒_⟆ {a ℓ} (A : Set a) (P : 𝒫 A → Set ℓ) : Set (a ⊔ ℓ ⊔ s) where
  constructor elim
  field
    ⟅_⟆-set : ∀ {xs} → isSet (P xs)
    ⟅_⟆[] : P []
    ⟅_⟆_&_∷_ : ∀ p x xs → P xs → P (p & x ∷ xs)
  private z = ⟅_⟆[]; f = ⟅_⟆_&_∷_
  field
    ⟅_⟆-dup : (∀ p q x xs pxs → PathP (λ i → P (dup p q x xs i))
              (f p x (q & x ∷ xs) (f q x xs pxs)) (f (p + q) x xs pxs))
    ⟅_⟆-com : (∀ p x q y xs pxs → PathP (λ i → P (com p x q y xs i))
              (f p x (q & y ∷ xs) (f q y xs pxs)) (f q y (p & x ∷ xs) (f p x xs pxs)))
    ⟅_⟆-del : (∀ x xs pxs → PathP (λ i → P (del x xs i))
              (f 0# x xs pxs) pxs)
  ⟅_⟆⇓ : (xs : 𝒫 A) → P xs
  ⟅ [] ⟆⇓ = z
  ⟅ p & x ∷ xs ⟆⇓ = f p x xs ⟅ xs ⟆⇓
  ⟅ dup p q x xs i ⟆⇓ = ⟅_⟆-dup p q x xs ⟅ xs ⟆⇓ i
  ⟅ com p x q y xs i ⟆⇓ = ⟅_⟆-com p x q y xs ⟅ xs ⟆⇓ i
  ⟅ del x xs i ⟆⇓ = ⟅_⟆-del x xs ⟅ xs ⟆⇓ i
  ⟅ trunc xs ys p q i j ⟆⇓ =
    elimSquash₀ (λ xs → ⟅_⟆-set {xs}) (trunc xs ys p q) ⟅ xs ⟆⇓ ⟅ ys ⟆⇓ (cong ⟅_⟆⇓ p) (cong ⟅_⟆⇓ q) i j

open ⟅_⇒_⟆ public
elim-syntax : ∀ {a ℓ} → (A : Set a) → (𝒫 A → Set ℓ) → Set (a ⊔ ℓ ⊔ s)
elim-syntax = ⟅_⇒_⟆

syntax elim-syntax A (λ xs → Pxs) = [ xs ∈𝒫 A ⇒ Pxs ]

record ⟦_⇒_⟧ {a ℓ} (A : Set a) (P : 𝒫 A → Set ℓ) : Set (a ⊔ ℓ ⊔ s) where
  constructor elim-prop
  field
    ⟦_⟧-prop : ∀ {xs} → isProp (P xs)
    ⟦_⟧[] : P []
    ⟦_⟧_&_∷_⟨_⟩ : ∀ p x xs → P xs → P (p & x ∷ xs)
  private z = ⟦_⟧[]; f = ⟦_⟧_&_∷_⟨_⟩
  ⟦_⟧⇑ = elim
          (isProp→isSet ⟦_⟧-prop)
          z f
          (λ p q x xs pxs →
            toPathP (⟦_⟧-prop (transp (λ i → P (dup p q x xs i))
            i0
            (f p x (q & x ∷ xs) (f q x xs pxs))) (f (p + q) x xs pxs) ))
          (λ p x q y xs pxs → toPathP (⟦_⟧-prop (transp (λ i → P (com p x q y xs i)) i0
            (f p x (q & y ∷ xs) (f q y xs pxs))) (f q y (p & x ∷ xs) (f p x xs pxs))))
            λ x xs pxs → toPathP (⟦_⟧-prop (transp (λ i → P (del x xs i)) i0 ((f 0# x xs pxs))) pxs)
  ⟦_⟧⇓ = ⟅ ⟦_⟧⇑ ⟆⇓

open ⟦_⇒_⟧ public
elim-prop-syntax : ∀ {a ℓ} → (A : Set a) → (𝒫 A → Set ℓ) → Set (a ⊔ ℓ ⊔ s)
elim-prop-syntax = ⟦_⇒_⟧

syntax elim-prop-syntax A (λ xs → Pxs) = ⟦ xs ∈𝒫 A ⇒ Pxs ⟧

record [_↦_] {a b} (A : Set a) (B : Set b) : Set (a ⊔ b ⊔ s) where
  constructor rec
  field
    [_]-set  : isSet B
    [_]_&_∷_ : R → A → B → B
    [_][]    : B
  private f = [_]_&_∷_; z = [_][]
  field
    [_]-dup  : ∀ p q x xs → f p x (f q x xs) ≡ f (p + q) x xs
    [_]-com : ∀ p x q y xs → f p x (f q y xs) ≡ f q y (f p x xs)
    [_]-del : ∀ x xs → f 0# x xs ≡ xs
  [_]⇑ = elim
            [_]-set
            z
            (λ p x _ xs → f p x xs)
            (λ p q x xs → [_]-dup p q x)
            (λ p x q y xs → [_]-com p x q y)
            (λ x xs → [_]-del x)
  [_]↓ = ⟅ [_]⇑ ⟆⇓
open [_↦_] public

Here’s one in action, to define map:

map : (A → B) → 𝒫 A → 𝒫 B
map = λ f → [ map′ f ]↓
  module Map where
  map′ : (A → B) → [ A ↦ 𝒫 B ]
  [ map′ f ] p & x ∷ xs = p & f x ∷ xs
  [ map′ f ][] = []
  [ map′ f ]-set = trunc
  [ map′ f ]-dup p q x xs = dup p q (f x) xs
  [ map′ f ]-com p x q y xs = com p (f x) q (f y) xs
  [ map′ f ]-del x xs = del (f x) xs

And here’s how we’d define union, and then prove that it’s associative:

infixr 5 _∪_
_∪_ : 𝒫 A → 𝒫 A → 𝒫 A
_∪_ = λ xs ys → [ union ys ]↓ xs
  module Union where
  union : 𝒫 A → [ A ↦ 𝒫 A ]
  [ union ys ]-set = trunc
  [ union ys ] p & x ∷ xs = p & x ∷ xs
  [ union ys ][] = ys
  [ union ys ]-dup = dup
  [ union ys ]-com = com
  [ union ys ]-del = del

∪-assoc : (xs ys zs : 𝒫 A) → xs ∪ (ys ∪ zs) ≡ (xs ∪ ys) ∪ zs
∪-assoc = λ xs ys zs → ⟦ ∪-assoc′ ys zs ⟧⇓ xs
  module UAssoc where
  ∪-assoc′ : ∀ ys zs → ⟦ xs ∈𝒫 A ⇒ xs ∪ (ys ∪ zs) ≡ (xs ∪ ys) ∪ zs ⟧
  ⟦ ∪-assoc′ ys zs ⟧-prop = trunc _ _
  ⟦ ∪-assoc′ ys zs ⟧[] = refl
  ⟦ ∪-assoc′ ys zs ⟧ p & x ∷ xs ⟨ P ⟩ = cong (p & x ∷_) P

There’s a lot more stuff here that I won’t bore you with.

Boring Stuff
infixl 7 _⋊_
_⋊_ : R → 𝒫 A → 𝒫 A
_⋊_ = λ p → [ p ⋊′ ]↓
  module Cond where
  _⋊′ : R → [ A ↦ 𝒫 A ]
  [ p ⋊′ ]-set = trunc
  [ p ⋊′ ][] = []
  [ p ⋊′ ] q & x ∷ xs = p * q & x ∷ xs
  [ p ⋊′ ]-com q x r y xs = com (p * q) x (p * r) y xs
  [ p ⋊′ ]-dup q r x xs =
    p * q & x ∷ p * r & x ∷ xs ≡⟨ dup (p * q) (p * r) x xs ⟩
    p * q + p * r & x ∷ xs     ≡˘⟨ cong (_& x ∷ xs) (*⟨+⟩ p q r) ⟩
    p * (q + r) & x ∷ xs       ∎
  [ p ⋊′ ]-del x xs =
    p * 0# & x ∷ xs ≡⟨ cong (_& x ∷ xs) (*0 p) ⟩
    0# & x ∷ xs     ≡⟨ del x xs ⟩
    xs              ∎

∫ : (A → R) → 𝒫 A → R
∫ = λ f → [ ∫′ f ]↓
  module Expect where
  ∫′ : (A → R) → [ A ↦ R ]
  [ ∫′ f ]-set = sIsSet
  [ ∫′ f ] p & x ∷ xs = p * f x + xs
  [ ∫′ f ][] = 0#
  [ ∫′ f ]-dup p q x xs =
    p * f x + (q * f x + xs) ≡˘⟨ +-assoc (p * f x) (q * f x) xs ⟩
    (p * f x + q * f x) + xs ≡˘⟨ cong (_+ xs) (⟨+⟩* p q (f x)) ⟩
    (p + q) * f x + xs ∎
  [ ∫′ f ]-com p x q y xs =
    p * f x + (q * f y + xs) ≡˘⟨ +-assoc (p * f x) (q * f y) (xs) ⟩
    p * f x + q * f y + xs   ≡⟨ cong (_+ xs) (+-comm (p * f x) (q * f y)) ⟩
    q * f y + p * f x + xs   ≡⟨ +-assoc (q * f y) (p * f x) (xs) ⟩
    q * f y + (p * f x + xs) ∎
  [ ∫′ f ]-del x xs =
    0# * f x + xs ≡⟨ cong (_+ xs) (0* (f x)) ⟩
    0# + xs       ≡⟨ 0+ (xs) ⟩
    xs            ∎

syntax ∫ (λ x → e) = ∫ e 𝑑 x

pure : A → 𝒫 A
pure x = 1# & x ∷ []

∪-cons : ∀ p (x : A) xs ys → xs ∪ p & x ∷ ys ≡ p & x ∷ xs ∪ ys
∪-cons = λ p x xs ys → ⟦ ∪-cons′ p x ys ⟧⇓ xs
  module UCons where
  ∪-cons′ : ∀ p x ys → ⟦ xs ∈𝒫 A ⇒ xs ∪ p & x ∷ ys ≡ p & x ∷ xs ∪ ys ⟧
  ⟦ ∪-cons′ p x ys ⟧-prop = trunc _ _
  ⟦ ∪-cons′ p x ys ⟧[] = refl
  ⟦ ∪-cons′ p x ys ⟧ r & y ∷ xs ⟨ P ⟩ = cong (r & y ∷_) P ; com r y p x (xs ∪ ys)

⋊-distribʳ : ∀ p q → (xs : 𝒫 A) → p ⋊ xs ∪ q ⋊ xs ≡ (p + q) ⋊ xs
⋊-distribʳ = λ p q → ⟦ ⋊-distribʳ′ p q ⟧⇓
  module JDistrib where
  ⋊-distribʳ′ : ∀ p q → ⟦ xs ∈𝒫 A ⇒ p ⋊ xs ∪ q ⋊ xs ≡ (p + q) ⋊ xs ⟧
  ⟦ ⋊-distribʳ′ p q ⟧-prop = trunc _ _
  ⟦ ⋊-distribʳ′ p q ⟧[] = refl
  ⟦ ⋊-distribʳ′ p q ⟧ r & x ∷ xs ⟨ P ⟩ =
    p ⋊ (r & x ∷ xs) ∪ q ⋊ (r & x ∷ xs)   ≡⟨ ∪-cons (q * r) x (p ⋊ (r & x ∷ xs)) (q ⋊ xs)  ⟩
    q * r & x ∷ p ⋊ (r & x ∷ xs) ∪ q ⋊ xs ≡⟨ cong (_∪ q ⋊ xs) (dup (q * r) (p * r) x (p ⋊ xs)) ⟩
    q * r + p * r & x ∷ p ⋊ xs ∪ q ⋊ xs   ≡˘⟨ cong (_& x ∷ (p ⋊ xs ∪ q ⋊ xs)) (⟨+⟩* q p r) ⟩
    (q + p) * r & x ∷ p ⋊ xs ∪ q ⋊ xs     ≡⟨ cong ((q + p) * r & x ∷_) P ⟩
    (q + p) * r & x ∷ (p + q) ⋊ xs        ≡⟨ cong (λ pq → pq * r & x ∷ (p + q) ⋊ xs) (+-comm q p) ⟩
    (p + q) * r & x ∷ (p + q) ⋊ xs        ≡⟨⟩
    _⋊_ (p + q) (r & x ∷ xs) ∎

⋊-distribˡ : ∀ p → (xs ys : 𝒫 A) → p ⋊ xs ∪ p ⋊ ys ≡ p ⋊ (xs ∪ ys)
⋊-distribˡ = λ p xs ys → ⟦ ⋊-distribˡ′ p ys ⟧⇓ xs
  module JDistribL where
  ⋊-distribˡ′ : ∀ p ys → ⟦ xs ∈𝒫 A ⇒ p ⋊ xs ∪ p ⋊ ys ≡ p ⋊ (xs ∪ ys) ⟧
  ⟦ ⋊-distribˡ′ p ys ⟧-prop = trunc _ _
  ⟦ ⋊-distribˡ′ p ys ⟧[] = refl
  ⟦ ⋊-distribˡ′ p ys ⟧ q & x ∷ xs ⟨ P ⟩ =
    p ⋊ (q & x ∷ xs) ∪ p ⋊ ys ≡⟨⟩
    p * q & x ∷ p ⋊ xs ∪ p ⋊ ys ≡⟨ cong (p * q & x ∷_) P ⟩
    p * q & x ∷ p ⋊ (xs ∪ ys) ≡⟨⟩
    p ⋊ ((q & x ∷ xs) ∪ ys) ∎


∪-idʳ : (xs : 𝒫 A) → xs ∪ [] ≡ xs
∪-idʳ = ⟦ ∪-idʳ′ ⟧⇓
  module UIdR where
  ∪-idʳ′ : ⟦ xs ∈𝒫 A ⇒ xs ∪ [] ≡ xs ⟧
  ⟦ ∪-idʳ′ ⟧-prop = trunc _ _
  ⟦ ∪-idʳ′ ⟧[] = refl
  ⟦ ∪-idʳ′ ⟧ p & x ∷ xs ⟨ P ⟩ = cong (p & x ∷_) P

∪-comm : (xs ys : 𝒫 A) → xs ∪ ys ≡ ys ∪ xs
∪-comm = λ xs ys → ⟦ ∪-comm′ ys ⟧⇓ xs
  module UComm where
  ∪-comm′ : ∀ ys → ⟦ xs ∈𝒫 A ⇒ xs ∪ ys ≡ ys ∪ xs ⟧
  ⟦ ∪-comm′ ys ⟧-prop = trunc _ _
  ⟦ ∪-comm′ ys ⟧[] = sym (∪-idʳ ys)
  ⟦ ∪-comm′ ys ⟧ p & x ∷ xs ⟨ P ⟩ = cong (p & x ∷_) P ; sym (∪-cons p x ys xs)

0⋊ : (xs : 𝒫 A) → 0# ⋊ xs ≡ []
0⋊ = ⟦ 0⋊′ ⟧⇓
  module ZeroJ where
  0⋊′ : ⟦ xs ∈𝒫 A ⇒ 0# ⋊ xs ≡ [] ⟧
  ⟦ 0⋊′ ⟧-prop = trunc _ _
  ⟦ 0⋊′ ⟧[] = refl
  ⟦ 0⋊′ ⟧ p & x ∷ xs ⟨ P ⟩ =
    0# ⋊ (p & x ∷ xs)    ≡⟨⟩
    0# * p & x ∷ 0# ⋊ xs ≡⟨ cong (_& x ∷ 0# ⋊ xs) (0* p) ⟩
    0# & x ∷ 0# ⋊ xs     ≡⟨ del x (0# ⋊ xs) ⟩
    0# ⋊ xs              ≡⟨ P ⟩
    [] ∎

However, I can demonstrate the monadic bind:

_>>=_ : 𝒫 A → (A → 𝒫 B) → 𝒫 B
xs >>= f = [ f =<< ]↓ xs
  module Bind where
  _=<< : (A → 𝒫 B) → [ A ↦ 𝒫 B ]
  [ f =<< ] p & x ∷ xs = p ⋊ (f x) ∪ xs
  [ f =<< ][] = []
  [ f =<< ]-set = trunc
  [ f =<< ]-del x xs = cong (_∪ xs) (0⋊ (f x))
  [ f =<< ]-dup p q x xs =
    p ⋊ (f x) ∪ q ⋊ (f x) ∪ xs   ≡⟨ ∪-assoc (p ⋊ f x) (q ⋊ f x) xs ⟩
    (p ⋊ (f x) ∪ q ⋊ (f x)) ∪ xs ≡⟨ cong (_∪ xs) (⋊-distribʳ p q (f x) ) ⟩
    _⋊_ (p + q) (f x) ∪ xs ∎
  [ f =<< ]-com p x q y xs =
    p ⋊ (f x) ∪ q ⋊ (f y) ∪ xs   ≡⟨ ∪-assoc (p ⋊ f x) (q ⋊ f y) xs ⟩
    (p ⋊ (f x) ∪ q ⋊ (f y)) ∪ xs ≡⟨ cong (_∪ xs) (∪-comm (p ⋊ f x) (q ⋊ f y)) ⟩
    (q ⋊ (f y) ∪ p ⋊ (f x)) ∪ xs ≡˘⟨ ∪-assoc (q ⋊ f y) (p ⋊ f x) xs ⟩
    q ⋊ (f y) ∪ p ⋊ (f x) ∪ xs ∎

And we can prove the monad laws, also:

Proofs of Monad Laws

1⋊ : (xs : 𝒫 A) → 1# ⋊ xs ≡ xs
1⋊ = ⟦ 1⋊′ ⟧⇓
  module OneJoin where
  1⋊′ : ⟦ xs ∈𝒫 A ⇒ 1# ⋊ xs ≡ xs ⟧
  ⟦ 1⋊′ ⟧-prop = trunc _ _
  ⟦ 1⋊′ ⟧[] = refl
  ⟦ 1⋊′ ⟧ p & x ∷ xs ⟨ P ⟩ =
    1# ⋊ (p & x ∷ xs) ≡⟨⟩
    1# * p & x ∷ 1# ⋊ xs ≡⟨ cong (_& x ∷ 1# ⋊ xs) (1* p) ⟩
    p & x ∷ 1# ⋊ xs ≡⟨ cong (p & x ∷_) P ⟩
    p & x ∷ xs ∎

>>=-distrib : (xs ys : 𝒫 A) (g : A → 𝒫 B) → (xs ∪ ys) >>= g ≡ (xs >>= g) ∪ (ys >>= g)
>>=-distrib = λ xs ys g → ⟦ >>=-distrib′ ys g ⟧⇓ xs
  module BindDistrib where
  >>=-distrib′ : (ys : 𝒫 A) (g : A → 𝒫 B) → ⟦ xs ∈𝒫 A ⇒ ((xs ∪ ys) >>= g) ≡ (xs >>= g) ∪ (ys >>= g) ⟧
  ⟦ >>=-distrib′ ys g ⟧-prop = trunc _ _
  ⟦ >>=-distrib′ ys g ⟧[] = refl
  ⟦ >>=-distrib′ ys g ⟧ p & x ∷ xs ⟨ P ⟩ =
    (((p & x ∷ xs) ∪ ys) >>= g) ≡⟨⟩
    (p & x ∷ xs ∪ ys) >>= g ≡⟨⟩
    p ⋊ g x ∪ ((xs ∪ ys) >>= g) ≡⟨ cong (p ⋊ g x ∪_) P ⟩
    p ⋊ g x ∪ ((xs >>= g) ∪ (ys >>= g)) ≡⟨ ∪-assoc (p ⋊ g x) (xs >>= g) (ys >>= g) ⟩
    (p ⋊ g x ∪ (xs >>= g)) ∪ (ys >>= g) ≡⟨⟩
    ((p & x ∷ xs) >>= g) ∪ (ys >>= g) ∎

*-assoc-⋊ : ∀ p q (xs : 𝒫 A) → (p * q) ⋊ xs ≡ p ⋊ (q ⋊ xs)
*-assoc-⋊ = λ p q → ⟦ *-assoc-⋊′ p q ⟧⇓
  module MAssocJ where
  *-assoc-⋊′ : ∀ p q → ⟦ xs ∈𝒫 A ⇒ (p * q) ⋊ xs ≡ p ⋊ (q ⋊ xs) ⟧
  ⟦ *-assoc-⋊′ p q ⟧-prop = trunc _ _
  ⟦ *-assoc-⋊′ p q ⟧[] = refl
  ⟦ *-assoc-⋊′ p q ⟧ r & x ∷ xs ⟨ P ⟩ =
    p * q ⋊ (r & x ∷ xs) ≡⟨⟩
    p * q * r & x ∷ (p * q ⋊ xs) ≡⟨ cong (_& x ∷ (p * q ⋊ xs)) (*-assoc p q r) ⟩
    p * (q * r) & x ∷ (p * q ⋊ xs) ≡⟨ cong (p * (q * r) & x ∷_) P ⟩
    p * (q * r) & x ∷ (p ⋊ (q ⋊ xs)) ≡⟨⟩
    p ⋊ (q ⋊ (r & x ∷ xs)) ∎

⋊-assoc->>= : ∀ p (xs : 𝒫 A) (f : A → 𝒫 B) → (p ⋊ xs) >>= f ≡ p ⋊ (xs >>= f)
⋊-assoc->>= = λ p xs f → ⟦ ⋊-assoc->>=′ p f ⟧⇓ xs
  module JDistribB where
  ⋊-assoc->>=′ : ∀ p (f : A → 𝒫 B) → ⟦ xs ∈𝒫 A ⇒ (p ⋊ xs) >>= f ≡ p ⋊ (xs >>= f) ⟧
  ⟦ ⋊-assoc->>=′ p f ⟧-prop = trunc _ _
  ⟦ ⋊-assoc->>=′ p f ⟧[] = refl
  ⟦ ⋊-assoc->>=′ p f ⟧ q & x ∷ xs ⟨ P ⟩ =
    (p ⋊ (q & x ∷ xs)) >>= f ≡⟨⟩
    (p * q & x ∷ p ⋊ xs) >>= f ≡⟨⟩
    ((p * q) ⋊ f x) ∪ ((p ⋊ xs) >>= f) ≡⟨ cong (((p * q) ⋊ f x) ∪_) P ⟩
    ((p * q) ⋊ f x) ∪ (p ⋊ (xs >>= f)) ≡⟨ cong (_∪ (p ⋊ (xs >>= f))) (*-assoc-⋊ p q (f x)) ⟩
    (p ⋊ (q ⋊ f x)) ∪ (p ⋊ (xs >>= f)) ≡⟨ ⋊-distribˡ p (q ⋊ f x) (xs >>= f) ⟩
    p ⋊ ((q & x ∷ xs) >>= f) ∎

>>=-idˡ : (x : A) → (f : A → 𝒫 B)
      → (pure x >>= f) ≡ f x
>>=-idˡ x f =
  pure x >>= f ≡⟨⟩
  (1# & x ∷ []) >>= f ≡⟨⟩
  1# ⋊ f x ∪ [] >>= f ≡⟨⟩
  1# ⋊ f x ∪ [] ≡⟨ ∪-idʳ (1# ⋊ f x) ⟩
  1# ⋊ f x ≡⟨ 1⋊ (f x) ⟩
  f x ∎

>>=-idʳ : (xs : 𝒫 A) → xs >>= pure ≡ xs
>>=-idʳ = ⟦ >>=-idʳ′ ⟧⇓
  module Law1 where
  >>=-idʳ′ : ⟦ xs ∈𝒫 A ⇒ xs >>= pure ≡ xs ⟧
  ⟦ >>=-idʳ′ ⟧-prop = trunc _ _
  ⟦ >>=-idʳ′ ⟧[] = refl
  ⟦ >>=-idʳ′ ⟧ p & x ∷ xs ⟨ P ⟩ =
    ((p & x ∷ xs) >>= pure) ≡⟨⟩
    p ⋊ (pure x) ∪ (xs >>= pure) ≡⟨⟩
    p ⋊ (1# & x ∷ []) ∪ (xs >>= pure) ≡⟨⟩
    p * 1# & x ∷ [] ∪ (xs >>= pure) ≡⟨⟩
    p * 1# & x ∷ (xs >>= pure) ≡⟨ cong (_& x ∷ (xs >>= pure)) (*1 p) ⟩
    p & x ∷ xs >>= pure ≡⟨ cong (p & x ∷_) P ⟩
    p & x ∷ xs ∎

>>=-assoc : (xs : 𝒫 A) → (f : A → 𝒫 B) → (g : B → 𝒫 C)
      → ((xs >>= f) >>= g) ≡ xs >>= (λ x → f x >>= g)
>>=-assoc = λ xs f g → ⟦ >>=-assoc′ f g ⟧⇓ xs
  module Law3 where
  >>=-assoc′ : (f : A → 𝒫 B) → (g : B → 𝒫 C) → ⟦ xs ∈𝒫 A ⇒ ((xs >>= f) >>= g) ≡ xs >>= (λ x → f x >>= g) ⟧
  ⟦ >>=-assoc′ f g ⟧-prop = trunc _ _
  ⟦ >>=-assoc′ f g ⟧[] = refl
  ⟦ >>=-assoc′ f g ⟧ p & x ∷ xs ⟨ P ⟩ =
    (((p & x ∷ xs) >>= f) >>= g) ≡⟨⟩
    ((p ⋊ f x ∪ (xs >>= f)) >>= g) ≡⟨ >>=-distrib (p ⋊ f x) (xs >>= f) g ⟩
    ((p ⋊ f x) >>= g) ∪ ((xs >>= f) >>= g) ≡⟨ cong ((p ⋊ f x) >>= g ∪_) P ⟩
    ((p ⋊ f x) >>= g) ∪ (xs >>= (λ y → f y >>= g)) ≡⟨ cong (_∪ (xs >>= (λ y → f y >>= g))) (⋊-assoc->>= p (f x) g) ⟩
    p ⋊ (f x >>= g) ∪ (xs >>= (λ y → f y >>= g)) ≡⟨⟩
    ((p & x ∷ xs) >>= (λ y → f y >>= g)) ∎

Conclusion

I’ve really enjoyed working with cubical Agda so far, and the proofs above were a pleasure to write. I think I can use the above definition to get a workable differential privacy monad, also.

Anyway, all the code is available here.

OCaml Planet: Learning ML Depth-First

If you haven’t heard of it, Depth First Learning is a wonderful resource for learning about machine learning.

Planet Haskell: Neil Mitchell: Code Statistics and Measuring Contributions

Summary: The only way to understand a code base is to ask someone who works on it.

This weekend a relative asked me how we can tell who wrote the code behind the black hole image, and was interested in the stats available on GitHub. There are lots of available stats, but almost every stat can be misleading in some circumstances. The only people who have the context to interpret the stats are those who work on the project, hence my default approach to assessing a project is to ask someone who works on it, with the understanding that they may look at relevant stats on GitHub or similar. In this post let's go through some of the reasons that a simplistic interpretation of the stats is often wrong.

These remarks all apply whether you're trying to assign credit for a photo, trying to do performance reviews for an employee or trying to manage a software project.

What to measure

There are broadly two ways to measure activity on the code in a repo. The first is additions/deletions of lines of code, where a modified line of code is usually measured as an addition and deletion. The second is number of commits or pull requests, which measures how many batches of changes are made. The problem with the latter is that different people have different styles - some produce big batches, some tiny batches - a factor of 10 between different developers is not unusual. There are also policy reasons that commits may be misleading - some projects squash multiple commits down to one when merging. The number of lines of code gives a better measure of what has changed, but it's merely better, not good - the rest of this post assumes people are focusing on number of lines of code changed.

All code is equal

Treating number of lines changed as the contribution assumes that every line is equally hard - but that's far from the case. At a previous company I worked on code that ranged from the internals of a compiler, to intricate C++ libraries, to Haskell GUIs. I estimate that I could produce 100x the volume of Haskell GUIs compared to C++ libraries. Other colleagues worked only on the compiler, or only on GUIs - vastly changing how much code they produced per hour.

Similarly, each line of code is not equally important. Last week I wrote a few hundred lines of code. Of those, nearly all were done on Monday, and the remainder of the week involved a single line that was ferociously difficult, with lots of obscure side conditions (libraries and link order...). That one line is super valuable, but simplistic measuring suggests I napped all Tuesday and Wednesday.

Code is attributed properly

Developers typically have user handles or email addresses that are used for code contributions. I currently have at least two handles, and in the past when we did stats on a $WORK project there were 6 different email addresses that I claimed ownership of. As a consequence, my work shows up under lots of different names, and counting it can be difficult. The longer a project runs, the more chance of developers changing identity.

The person who changed code did the work

A big part of software engineering is making old code obsolete. I was recently involved in deleting many thousands of lines that were no longer necessary. With a small team, we created a new project, implemented it, converted 90% of the uses over to the new code, and then stopped. Separately, someone else did the last 10% of the conversion, and then was able to delete a huge number of lines of code. There was definitely work in deleting the final bit of code, but the "labour" involved in that final deletion was mostly carried out months ago by others.

Similarly, when copying a new project in (often called vendoring) there is a big commit to add a lot of new code that was originally written by others, but which gets attributed to a named individual.

All code is in one place

Often projects will incorporate library code. For example, the official contribution of Niklas Broberg to HLint is 8 lines. However, he's called out explicitly in the README as someone whose contributions were essential to the project: he wrote a library called haskell-src-exts, without which HLint could not exist, and then continued to improve it for the benefit of HLint for many years.

Furthermore, projects like HLint rely on a compiler, libraries, operating system, and even a version control system. Usually these get overlooked when giving credit since they are relatively old and shared between many projects - but they are an essential part of getting things to work.

More code is better

The only purpose of code is to do a thing - whatever that thing might be. In all other ways, code is a liability - it has to be read, tested, compiled etc. Given the choice between 10 lines or 1,000,000 lines of code, I would always prefer 10 lines if they did the same thing. A smarter programmer who can do more with fewer lines of code is better. The famous quote attributed to Bill Gates is still true many decades later:

Measuring programming progress by lines of code is like measuring aircraft building progress by weight.

Code is the essence

Measuring code suggests that code is the thing that matters. The code certainly does matter, but the code is just a representation of an underlying algorithm. The code follows a high-level design. Often much more significant contributions are made by picking the right design, algorithm, approach etc.

Code is all that matters

In a large project there is code, but the code doesn't exist in a vacuum. There are other code-adjacent tasks to be performed - discussions, mentoring, teaching, reporting issues, buying computers etc. Many of these are never reflected in the code, yet if omitted, the code wouldn't happen, or would happen slower.

OUR VALUED CUSTOMERS: To her boyfriend...

Perlsphere: Grant Extension Approved: Tony Cook (Maintaining Perl 5)

I'm pleased to announce that the Board of Directors approved Tony's request for another $20,000. It will allow him to dedicate another 400 hours to this work.

I would like to thank the community members who took time to comment on this grant extension request and our sponsors who made funding the grant possible through the Perl 5 Core Maintenance Fund.

Quiet Earth: Million Dollar Short THE SHIPMENT to Screen at Tribeca [Trailer]

The Tribeca Film Festival will screen The Shipment, a $1 million father/daughter science fiction short, written, executive produced and directed by VFX artist and 3D animator Bobby Bala.

The epic intergalactic tale tells the story of a widowed cargo hauler who finds himself stranded with his daughter on a wretched spaceport after their old ship breaks down. Faced with an unscrupulous offer to escape, he confronts a difficult dilemma that puts his morality to the ultimate test as he tries to provide a better life for his family.

“My goal was to create my first film with my daughter. I felt this would be a memorable experience we could both share for many years and it definitely was,” says director Bobby Bala.


Besides Bobby’s daughter and newcome [Continued ...]

Michael Geist: Rewriting Canadian Privacy Law: Commissioner Signals Major Change on Cross-Border Data Transfers

Faced with a decades-old private-sector privacy law that is no longer fit for purpose in the digital age, the Office of the Privacy Commissioner of Canada (OPC) has embarked on a dramatic reinterpretation of the law premised on incorporating new consent requirements. My Globe and Mail op-ed notes the strained interpretation arose last Tuesday when the OPC released a consultation paper signalling a major shift in its position on cross-border data transfers.

Canadian privacy law has long relied on an “accountability principle” to ensure that organizations transferring personal information across borders to third parties are ultimately responsible for safeguarding that information. The Canadian approach maintained that it did not matter where the personal information was stored or who was involved in its processing, since the ultimate responsibility lay with the first organization to collect the data.

In fact, the OPC’s January 2009 guidelines on cross-border data transfers explicitly stated that “assuming the information is being used for the purpose it was originally collected, additional consent for the transfer is not required.” That guidance enabled Canadian companies to outsource data-processing activities to other jurisdictions so long as they used contractual provisions to guarantee appropriate safeguards.

The federal privacy commissioner seems ready to reverse that long-standing approach, stating that “a company that is disclosing personal information across a border, including for processing, must obtain consent.” It adds that “it is the OPC’s view that individuals would reasonably expect to be notified if their information was to be disclosed outside of Canada and be subject to the legal regime of another country.”

While the OPC position is a preliminary one – the office is accepting comments in a consultation until June 4 – there are distinct similarities with its attempt to add the right to be forgotten (the European privacy rule that allows individuals to request removal of otherwise lawful content about themselves from search results) into Canadian law. In that instance, despite the absence of a right-to-be-forgotten principle in the statute, the OPC simply ruled that it was reading in a right to de-index search results into PIPEDA (Canada’s Personal Information Protection and Electronic Documents Act). The issue is currently being challenged before the courts.

In this case, the absence of meaningful updates to Canadian privacy law for many years has led to another exceptionally aggressive interpretation of the law by the OPC, effectively seeking to update the law through interpretation rather than actual legislative reform.

The OPC is careful to note that it believes its position is consistent with Canada’s international trade obligations, but the issue could be subject to challenge. The Comprehensive and Progressive Trade Agreement for Trans-Pacific Partnership (CPTPP), the major Asia-based trade agreement that Canada implemented last year, features a commitment to allow cross-border transfers of information by electronic means.

The treaty limits restrictions on the open-border principle for data transfers, stipulating that any limitations may not be arbitrary, discriminatory or a disguised restriction on trade. Moreover, any limits cannot be greater than those required to achieve a legitimate policy objective. The Canada-U.S.-Mexico Agreement contains similar language.

The imposition of consent requirements for cross-border data transfers could be regarded as a non-tariff barrier to trade that imposes restrictions greater than those required to achieve the objective of privacy protection. The interpretation is particularly vulnerable given that PIPEDA has long been said to provide such protections without the need for this additional consent regime.

Regardless of the international trade implications, however, the OPC approach would have enormous implications for e-commerce and data flows, with many organizations forced to rethink well-established data practices and compliance policies. Indeed, companies thinking of servicing the Canadian market would be forced to consider whether they must limit data transfers, likely adding cost and complexity to digital operations.

As Canadians express mounting concerns about their privacy online, tougher enforcement measures and better safeguards may be needed. Yet those issues are more properly addressed by government policy within a national data strategy and privacy law reform, not an OPC guideline that if enacted is likely to spark an avalanche of legal challenges.

The post Rewriting Canadian Privacy Law: Commissioner Signals Major Change on Cross-Border Data Transfers appeared first on Michael Geist.

new shelton wet/dry: The arrival of driverless cars could help us reduce light pollution

During the period known as the High Middle Ages, between 1100-1250, the Catholic Church built over 1400 Gothic churches in the Paris Basin alone. […] This thesis examines the implicit costs of building the Gothic churches of the Paris Basin built between 1100-1250, and attempts to estimate the percentage of the regional economy that was devoted [...]

things magazine: In memoriam

A 2015 project on gothic structure devoted a substantial amount of time to scanning Notre Dame, creating a precise 3D model. The project was the initiative of the late Andrew Tallon, who very tragically died just last November. His work … Continue reading

OCaml Weekly News: OCaml Weekly News, 16 Apr 2019

  1. Dune 1.9.0
  2. Minisat-ml: a reimplementation of minisat in OCaml
  3. Opam packages and CI
  4. opam 2.0.4 release
  5. Other OCaml News

Explosm.net: Comic for 2019.04.16

New Cyanide and Happiness Comic

Penny Arcade: News Post: Returner

Tycho: Having come back from Spokane with more or less my full complement of hit points, it wasn’t that bad really; Spokane was a kind of scorched lily pad that I made brief contact with on the way to Priest Lake.  There’s no reason you should know about Priest Lake!  But that’s where I went and I think I emerged with more sanity than I had when I went in. The Tales from the Loop game I run for Club PA takes place in Spokane, the Spokane I grew up in, which barely exists anymore.  That time has passed is no great surprise, but time moves much slower there than it…

Jesse Moynihan: Tarot Booklet Page 9

VII Justice and XI Strength vs. VII Strength and XI Justice There’s a long standing debate about the proper order for Strength and Justice. Traditionally Justice came first and Strength a few cards later. These older decks are referred to as the exoteric decks. A.E. Waite changed the order to fit to his way of […]

Penny Arcade: Comic: Returner

New Comic: Returner

Michael Geist: The LawBytes Podcast, Episode 7: What if Copyright Law Took Authors Rights Seriously? A Conversation with Professor Rebecca Giblin

What if copyright law took authors rights seriously?  Many groups claim to do so, but Professor Rebecca Giblin, one of the world’s leading experts on creator copyright, isn’t convinced. Professor Giblin argues that creators are often placed at the centre of the debate only to be largely ignored by other stakeholders. Professor Giblin joins this week’s Lawbytes podcast to talk about her Author’s Interest Project, the latest data, and why Canadian artist Bryan Adams may be on to something when it comes to his copyright reform proposal to benefit creators.

The podcast can be downloaded here and is embedded below. The transcript downloadable here. Subscribe to the podcast via Apple Podcast, Google Play, Spotify or the RSS feed. Updates on the podcast on Twitter at @Lawbytespod.

Episode Notes:

The Author’s Interest Project
Giblin, A new copyright bargain? Reclaiming lost culture and getting authors paid
Giblin, Fat horses and starving sparrows: on bullshit in copyright debates
Yuvaraj, Reversion laws: what’s happening elsewhere in the world?

Credits:

Wochitte Entertainment, Hachette Authors Urge Amazon Board To End Contract Dispute
CTV News, Bryan Adams speaks in Ottawa, urges change to copyright laws
TruTV, Adam Ruins Everything – How Mickey Mouse Destroyed the Public Domain
Reagan Library, President Reagan Signing the Berne Convention Implementation Act of 1988 on October 31, 1988
CBC News, Libraries and E-Licensing

Transcript by Temi downloadable here.

 

The post The LawBytes Podcast, Episode 7: What if Copyright Law Took Authors Rights Seriously? A Conversation with Professor Rebecca Giblin appeared first on Michael Geist.

Perlsphere: White Camel Awards for 2018

The White Camel Awards recognize outstanding, non-technical achievement in Perl. Started in 1999 by Perl mongers and later merged with The Perl Foundation, the awards committee selects three names from a long list of worthy Perl volunteers to recognize hard work in Perl Community, Perl Advocacy, and Perl User Groups. These awards have been managed by The Perl Review in conjunction with the The Perl Foundation.

For 2018, the White Camels recognize the efforts of these people whose hard work has made Perl and the Perl community a better place:

  • Perl Community - Todd Rinaldo, all sorts of things.

Todd Rinaldo has done plenty of technical work for Perl (you can blame him for taking . out of @INC), but he's done quite a bit of non-technical work for the community as well. He's helped to organize Perl conferences and hackathons and provided other support through one of Perl's largest financial contributors, cPanel.

  • Perl Advocacy - David Farrell, for Perltricks and Perl.com

David Farrell started the PerlTricks.com site in 2013, then resurrected Perl.com in 2017 (merging PerlTricks into it at the same time). He moved Perl.com to a GitHub repository that anyone can send pull requests to. Now it's easier than ever to not only create new content but update existing articles. He's also one of the co-organizers of the New York Perl mongers.

  • Perl User Groups - Max Maischein

Frankfurt.pm hosts the German and the Frankfurt Perl-Workshops and other special events (including YAPC::EU 2012). Max Maischein has been its treasurer since 2011, handling the accounting, reporting, and contracting, as well as coordinating work with other local organizations. Without that important work nothing could get done. He's part of the backbone of the German Perl community.

Congratulations to the 2018 winners!

Contributions to open source are largely in the form of code, as evidenced by the huge number of repos on github and distributions on CPAN. As we develop in public, users can see and recognize authors as they use the code. Community contributions can be less evident, and the White Camels were born as a way to recognize people who do non-technical work that can sometimes be less obvious and go unseen.

This year during the nomination process, there was some discussion about the focus of the White Camel and who is considered eligible. There was some confusion about technical vs. non-technical contributions of nominees, with some feeling that technical contributions should also be recognized in some form. There was also discussion about Perl 5 vs. Perl 6 contributions and whether that should be a consideration.

As with all things open source, the White Camels came from the community, and these are good discussions to continue as our community continues to evolve. Right now it's unclear what form the White Camels might take next year. The Perl Foundation will continue to support awards of some sort as a way to recognize contributors, typically through funding any costs associated with awards. Other leaders in the community will help determine what the awards might focus on, who we should recognize, and how. If you have new ideas for future awards or support continuing the White Camels as they are, we invite you to keep the discussion going.

OCaml Planet: The Mirage retreat: field trip report

Between March 6th and March 13th 2019, I attended the Mirage retreat organized by Hannes Mehnert in Marrakesh, Morocco.

The Mirage retreat takes place in an artist residency organized as a hostel (shared rooms with simple beds). Hannes gathers a lot of people whose activity is relevant to the Mirage project; some of them work within the OCaml ecosystem (but not necessarily Mirage), some work on system programming (not necessarily in OCaml). The whole place is for all of us for one week, excellent food is provided, and we get to do whatever we want. (Thanks to the work of the people there who make this possible.)

This was my second time attending the retreat – first time in November 2017. It is probably the work-related trip I enjoy most. When I get back home, I’m exhausted, thrilled, I met fascinating people and I learned a lot.

Marracheck

This week I came with a specific project (that was decided spontaneously maybe three weeks before that): I would work with Armaël Guéneau, also attending the retreat, on mixing ideas from the existing tools that check opam packages (opam-builder, opamcheck, opam-health-check), with the objective of building a tool that can build the whole opam-repository in less than a day on my laptop. The ultimate goal is to make it extremely easy for anyone to test the impact of a change to the OCaml compiler on the OCaml ecosystem.

We started with a lot of discussions on the design, inspired by our knowledge of opam-builder and opamcheck – I had hacked on opam-builder a bit before, and had detailed discussions with Damien about the design of opamcheck – and discussions with Kate, who works on opam-health-check and the opam CI in general, and was also attending the retreat. Then we did a good bit of pair-programming, with Armaël behind the keyboard. We decided to rebuild a tool from scratch and to use the opam-libs (the library-level API of the opam codebase). By the end of the week, we were still quite far from a working release, but we had a skeleton in place.

This would not have been possible without the presence, at the retreat, of Louis Gesbert and Raja Boujbel, who helped us navigate the (sometimes daunting) opam API. (We also noticed a few opportunities for improvements and sent a couple of pull requests to opam itself.) Louis and Raja are impressive in their handling of the opam codebase. There are obscure and ugly and painful things within the opam APIs, but they come from elegance/simplicity/pragmatism compromises that they made, understand well, and are consistently able to justify. It feels like a complex codebase that is growing as it discovers its use-cases, with inevitable cruft, but good hands at work to manage this complexity, within resource limits.

Network drivers

My roommate was Fabian Bonk, who participated in the “ixy project” at the TUM (Technische Universität München), Munich, Germany. The “ixy project” aims to implement a simple userland network driver (for a specific Intel network card) in many different languages, and see what works and what doesn’t. Fabian wrote the OCaml implementation, and was interested in finding ways to improve its performance.

At first I preferred to hear about his work from a distance; I know nothing of network cards, and there were people at the retreat noticeably more knowledgeable about writing high-performance OCaml code. Then I realized that this was a dangerous strategy: for essentially any topic there is someone more knowledgeable than you at the retreat. So why not work on userland network drivers?

Fabian and I made a few attempts at making the program faster, which had the somewhat hilarious result of making the program about 500x slower. It’s an interesting problem domain. The driver author says “I would really need to remove this copy here, even though that would require changing the whole Mirage API”; the first reaction is to argue that copying memory is actually quite fast, so it’s probably not the bottleneck. “But we have to copy”, they say, “ten gibibytes per second!”. Ouch.

Anyway, after some tinkering, I realized that Fabian working over SSH to a machine in Munich with the network card, and not being able to run latency tests because “for this you need an optical splitter and I don’t have the privilege level to access the one our university has”, wasn’t that great for benchmarking. So I decided to convince Fabian to implement a compliant network card, in C, on his machine – he insists on calling it a “simulator” but what’s the difference between software and hardware these days? The idea was that anyone could then use his network card on their own machine to test and benchmark the driver against, and make it faster. Unfortunately, he had an OSX machine, and everything you would need to implement this nicely is sort of broken on OSX (working POSIX-compliant shared memory? nope!).

Crowbar

One thing I realized during the retreat is that I have an amazing (dis)advantage over some other people there (Hannes included): I know that Crowbar is extremely easy to use. (You write a generator, a quickcheck-style test, you listen to the tool tell you how to set a few weird environment variables, and boom, there come the bugs.)

Apparently people who haven’t tried Crowbar yet are unwilling to do so because they’re not sure how easy it is. Unfortunately, having the amazing power to find bugs is also a curse: Hannes persuaded me to write a test for an OCaml implementation of the patch utility instead of sleeping one evening.

I’m not sure what tool authors can do to reduce this particular barrier to entry. Maybe one thing that works is to simply demo the tool in front of a crowded audience, every time you have a chance.

Mystery in a box

I helped Antonio Monteiro track down a failure in his HTTP/2 implementation, and over the course of that I caught a glimpse of the angstrom codebase. Angstrom contains a single-field record (used for first-class polymorphism) in one apparently-central data structure, and I noticed that the record does not carry the [@@unboxed] annotation (so it is one extra indirection at runtime). So I decided to add the annotation, hoping it would improve performances.

There is a dirty secret about [@@unboxed]: despite what most people think, it is extremely rare that it can make programs noticeably faster, because the GC is quite fast and combines allocations together – many allocations of a boxed object are combined with an allocation for their content or container, and often the indirection points to an immediately adjacent place in memory, so it is basically free to dereference. It may help in some extreme low-latency scenario where code is written to not allocate at all, but I have never personally seen a program where [@@unboxed] makes a noticeable performance difference.

That is, until angstrom. Adding [@@unboxed] to this record field makes the program noticeably slower. The generated code, at the point where this record is used, is much nicer, and each run allocates fewer words, but the program is noticeably slower – 7% slower. I found it extremely puzzling; Romain @dinosaure Calascibetta pointed out that he had tried the same thing, and was similarly puzzled.

Eventually I lured Pierre Chambart into studying the problem with me, and we figured it out. I won’t get into the technical details here (hopefully later), but I’ll point out that we tested our hypothesis by inserting (); in several places in the program, and Armaël’s baffled look made it more than worthwhile.

Video games

When invited to work on “anything they wanted”, some people had projects that sounded a bit more fun than a parallel compiler of all OPAM packages – a gameboy emulator, for example. Of course, everyone knows that having fun side-projects to work on from time to time is an excellent thing. Yet in the recent past those were kept behind more pressing side-projects on my TODO list (typically releasing ocamlbuild, batteries, ppx_deriving and plugins once in a while, or reviewing a compiler Pull Request). While no one was working on that specifically, this retreat made me want to try (eventually) something that I’ve never done before, namely implement a video game in OCaml. We’ll see whether that happens someday.

Conclusion

Thanks to everyone who was at the retreat (including the people that worked hard to ensure we could be there in the best condition). I had a great time and I’m hoping to come again for one of the next retreats.

Explosm.net: Comic for 2019.04.15

New Cyanide and Happiness Comic

Perlsphere: Perl Toolchain Summit: People & Projects

The Perl Toolchain Summit (PTS) is taking place later this month in Marlow, in the UK, as previously announced. This event brings together maintainers of most of the key systems and tools in the CPAN ecosystem, giving them a dedicated 4 days to work together. In this post we describe how the attendees are selected, and how we decide what everyone will work on. We're also giving you a chance to let us know if there are things you'd like to see worked on.

This blog post is brought to you by cPanel, who we're happy to announce are a Platinum sponsor for the PTS. cPanel are a well-known user and supporter of Perl, and we're very grateful for their support. More about cPanel at the end of this article.

PTS People

The prime goal for PTS is to bring together the lead developers of the main components of "the CPAN ecosystem" and give them four dedicated days to work together. The goal is to assemble "the right people" -- for both Perl 5 and Perl 6 -- then lock them in a room and let the magic happen. The PTS is an invite-only event, to ensure there are no distractions.

The CPAN ecosystem is an ad-hoc agglomeration of services, tools, modules, and specifications, each of them worked on by teams of one or more people. It's structured somewhat like the Perl onion: at the core we have PAUSE, ExtUtils::MakeMaker and the like. Around that we have the widely used services like CPAN Testers and MetaCPAN. Further out we have CPANTS and developer tools.

This same structure is followed in deciding who is invited:

  • The core group is the approximately 10 people who maintain the core systems like PAUSE, ExtUtils::MakeMaker, Dist::Zilla, MetaCPAN, etc.
  • They nominate people who they think should attend, based on who has been doing good things on the CPAN ecosystem over the previous year. The top 10 from that list are invited, resulting in about 20 people.
  • Those 20 people go through another round of voting, and we end up with about 30 people.
  • We've had a number of experimental attendees, and the organisers occasionally invite someone who seems deserving but hasn't floated up through the voting.
  • In some years we've had remote participants, but to be honest that doesn't work so well, as the people on-site tend to get into their groove, and the focus can be quite dynamic.

Most people working on the CPAN ecosystem are volunteers, and we all know that commitment and time available waxes and wanes; people come and go. So over time the people who get invited to the PTS slowly evolves.

PTS Projects

An invite to the PTS effectively says "you're doing good things for our community, so we'd like to offer you a dedicated four days to work on them, with similar people around you". In addition to coding, it's a great chance to grab the right group of people to thrash out some knotty issue, or agree on some change to a thing going forward.

In the run-up to the PTS, the attendees start thinking about what they're going to work on, and record them on the event wiki's project page. This not only helps them organise their thoughts ahead of time, but lets people identify overlap, common interests, and interesting projects that they'd like to help with.

It also prompts people to nudge each other: "hey, any chance you could work on XYZ bug/feature, as that would enable me to …".

One of the big benefits of the PTS is the chance to grab people and discuss something at depth, possibly iterating over a number of days. This may be on how to solve some cross-system issue, work out how to take something forward, or how to make something new happen. The "River of CPAN" model came out of such discussions at this event in Berlin (when it was known as the QA Hackathon).

The first time I attended this event, I went with a long todo list. I worked on maybe half of them, worked on a bunch of things that came up during the 4 days, and went home with an even longer todo list, fired up for the coming months. I generally work at the periphery of the CPAN ecosystem, but seeing the value of this event is what prompted me to help out with organisation since my first year (and I know that the other organisers have similar feelings).

What would you like to see worked on at the PTS?

This is your chance to let the attendees know if there's something you'd like to see worked on at the PTS:

  • Is there a MetaCPAN issue that's bugging you?
  • Ideas to make life better for module developers?
  • Or better for module users?
  • A topic you'd like to see discussed?

If so, please let us know via the comments, or you can email the organising team: organisers at perltoolchainsummit

About cPanel

cPanel® is a web-based control panel for managing a web hosting account. It provides a simple yet powerful interface for managing email accounts, security, domains, databases, and more. cPanel was originally written (in Perl!) by Nick Koston in 1997. cPanel (the company) now employs over 200 people, and Nick is their CEO. They've been using Perl 5 for more than 20 years, and have long been supporters of Perl and its community. You may recognise some of their developers who are CPAN authors: Todd Rinaldo (TODDR), Mark Gardner (MJGARDNER), Nicolas Rochelemagne (ATOOMIC), and others. Their CEO, Nick, still develops in Perl today.

We are very grateful to cPanel for continuing to support the Toolchain Summit.

The Shape of Code: The Algorithmic Accountability Act of 2019

The Algorithmic Accountability Act of 2019 has been introduced to the US congress for consideration.

The Act applies to “person, partnership, or corporation” with “greater than $50,000,000 … annual gross receipts”, or “possesses or controls personal information on more than— 1,000,000 consumers; or 1,000,000 consumer devices;”.

What does this Act have to say?

(1) AUTOMATED DECISION SYSTEM.—The term ‘‘automated decision system’’ means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.

That is all encompassing.

The following is what the Act is really all about, i.e., impact assessment.

(2) AUTOMATED DECISION SYSTEM IMPACT ASSESSMENT.—The term ‘‘automated decision system impact assessment’’ means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security that includes, at a minimum—

I think there is a typo in the following: “training, data” -> “training data”

(A) a detailed description of the automated decision system, its design, its training, data, and its purpose;

How many words are there in a “detailed description of the automated decision system”? I’m guessing the wording has to be something a consumer might be expected to understand. It would take a book to describe most systems, but I suspect that a page or two is what the Act’s proposers have in mind.

(B) an assessment of the relative benefits and costs of the automated decision system in light of its purpose, taking into account relevant factors, including—

Whose “benefits and costs”? Is the Act requiring that companies do a cost benefit analysis of their own projects? What are the benefits to the customer, compared to a company not using such a computerized approach? The main one I can think of is that the customer gets offered a service that would probably be too expensive to offer if the analysis was done manually.

The potential costs to the customer are listed next:

(i) data minimization practices;

(ii) the duration for which personal information and the results of the automated decision system are stored;

(iii) what information about the automated decision system is available to consumers;

This act seems to be more about issues around data retention, privacy, and customers having the right to find out what data companies have about them.

(iv) the extent to which consumers have access to the results of the automated decision system and may correct or object to its results; and

(v) the recipients of the results of the automated decision system;

What might the results be? A Yes/No on a loan/job application decision and product recommendations are a few examples.

Some more potential costs to the customer:

(C) an assessment of the risks posed by the automated decision system to the privacy or security of personal information of consumers and the risks that the automated decision system may result in or contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers; and

What is an “unfair” or “biased” decision? Machine learning finds patterns in data; when is a pattern in data considered to be unfair or biased?

In the UK, the sex discrimination act has resulted in car insurance companies not being able to offer women cheaper insurance than men (because women have less costly accidents). So the application form does not contain a gender question. But the applicant’s first name often provides a big clue as to their gender. So a similar Act in the UK would require that computer-based insurance quote generation systems did not make use of information on the applicant’s first name. There is other, less reliable, information that could be used to estimate gender, e.g., height, playing sport, etc.

Lots of very hard questions to be answered here.

Trivium: 14apr2019

things magazine: Entropy online

A question seeking recommendations of examples of Post-post-collapse fiction, from where we get to Stand Still, Stay Silent, a webcomic about a Scandinavian future. We can’t vouch for the accuracy of the language tree, but it’s a beautiful image. At … Continue reading

Jesse Moynihan: Tarot Booklet Page 8

VII The Chariot A charioteer embedded in a cube, which is nestled in the earth and being pulled by golden winged horses who are looking in the same direction, yet moving apart. They are the same color as the dog who chased the Fool. Perhaps they are an evolution of the Lovers in the previous […]

Explosm.net: Comic for 2019.04.14

New Cyanide and Happiness Comic

Daniel Lemire's blog: Science and Technology links (April 13th 2019)

  1. There is little evidence that digital screens are harmful to teenager’s mental health. If there is an effect, it is small.
  2. Cotton bags must be reused thousands of times before they match the environmental performance of plastic bags. Organic cotton bags are much worse than regular ones, requiring 20,000 reuses instead of only 7,000, due to the lower yield of organic farming. Cotton bags cannot be recycled. Paper bags must be reused dozens of times to have the same environmental impact as single-use plastic bags. For extra points, compute how many years you need to use an organic cotton bag, at a rate of two uses a week, to reach 20,000 uses. Why are we outlawing plastic bags, and not reusable organic cotton bags?
  3. I never understood the appeal of artificial-intelligence systems that take very little input from human beings (self-taught software). Rich Sutton makes a powerful case for them:

    The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.

    To put it another way, our most powerful weapon for ‘smarter’ software is to design systems that get better as we add more computational power, and then to add the computational power.

    The net trend is to build software that looks more and more like ‘brute force’ at a high level, but with increasing sophistication in the computational substrate to provide the necessary brute force.

  4. Goldstein, Qvist and Pinker make a powerful case for nuclear power in the New York Times. Nuclear power is safe, clean, relatively inexpensive and environmentally friendly. Renewable energies are not the solution, despite all the propaganda at the moment:

    Where will this gargantuan amount of carbon-free energy come from? The popular answer is renewables alone, but this is a fantasy. Wind and solar power are becoming cheaper, but they are not available around the clock, rain or shine, and batteries that could power entire cities for days or weeks show no sign of materializing any time soon. Today, renewables work only with fossil-fuel backup. Germany, which went all-in for renewables, has seen little reduction in carbon emissions.

  5. Human beings have better color perception than most other mammals.

    Humans, some primates, and some marsupials see an extended range of colors, but only by comparison with other mammals. Most non-mammalian vertebrate species distinguish different colors at least as well as humans, and many species of birds, fish, reptiles and amphibians, and some invertebrates, have more than three cone types and probably superior color vision to humans.

    So why would human beings have superior color vision compared to other mammals?

    A recent evolutionary account posits that trichromacy facilitates detecting subtle skin color changes to better distinguish important social states related to proceptivity, health, and emotion in others.

  6. As you age, your working memory degrades. A Nature article reports on how this can be reversed with electric brain stimulation.
  7. Genetically modified plants (GMOs) have reduced pesticide use by 37% while improving yields by 22%. Though no new technology is free from risk, neither lower yields nor higher pesticide use are free from risk.
  8. The poverty rate in China went from 34.5% of the population to 0.7% of the population between 2001 and 2015.

things magazine: Echo Chambers

A short guide to Broadcast, retrofuturist pop & hauntological pop. See also this earlier collection of music loosely grouped as Folk Horror. There are endearing fictional diversions in the genre, such as Hereford Wakes, ‘a five-part ITV children’s drama originally … Continue reading

Daniel Lemire's blog: Why are unrolled loops faster?

A common optimization in software is to “unroll loops”. It is best explained with an example. Suppose that you want to compute the scalar product between two arrays:

  // assuming x and y point to arrays of doubles, and length is their common size
  double sum = 0;
  for (size_t i = 0; i < length; i++)
    sum += x[i] * y[i];

An unrolled loop might look as follows:

  double sum = 0;
  size_t i = 0;
  if (length > 3)
    for (; i < length - 3; i += 4)
      sum += x[i] * y[i] + x[i + 1] * y[i + 1] +
             x[i + 2] * y[i + 2] + x[i + 3] * y[i + 3];
  // handle the last (length % 4) elements
  for (; i < length; i++)
    sum += x[i] * y[i];

Mathematically, both pieces of code are equivalent. However, the unrolled version is often faster. In fact, many compilers will happily (and silently) unroll loops for you (though not always).

Unrolled loops are not always faster. They generate larger binaries. They require more instruction decoding. They use more memory and instruction cache. Many processors have optimizations specific to small tight loops: manual loop unrolling that generates dozens of instructions within the loop tends to defeat these optimizations.

But why would unrolled loops be faster in the first place? One reason for their increased performance is that they lead to fewer instructions being executed.

Let us estimate the number of instructions that need to be executed in each iteration of the simple (rolled) loop. We need to load two values into registers. We need to execute a multiplication. And then we need to add the product to the sum. That is a total of four instructions. Unless you are cheating (e.g., by using SIMD instructions), you cannot do better than four instructions.
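As an aside, here is what the “cheating” SIMD route might look like. This sketch is mine, not from the post: it assumes float elements (the post never specifies a type) and 256-bit AVX with FMA (compile with something like gcc -O2 -mavx2 -mfma), so one fused multiply-add instruction processes eight pairs at once:

  #include <immintrin.h>
  #include <stddef.h>

  /* Dot product with AVX intrinsics: 8 float pairs per FMA instruction. */
  float dot_product_avx(const float *x, const float *y, size_t length) {
    __m256 acc = _mm256_setzero_ps();
    size_t i = 0;
    for (; i + 8 <= length; i += 8) {
      __m256 vx = _mm256_loadu_ps(x + i);   /* load 8 values from x */
      __m256 vy = _mm256_loadu_ps(y + i);   /* load 8 values from y */
      acc = _mm256_fmadd_ps(vx, vy, acc);   /* acc += vx * vy, lane-wise */
    }
    float buf[8];
    _mm256_storeu_ps(buf, acc);             /* horizontal sum of the 8 lanes */
    float sum = buf[0] + buf[1] + buf[2] + buf[3]
              + buf[4] + buf[5] + buf[6] + buf[7];
    for (; i < length; i++)                 /* scalar tail for leftovers */
      sum += x[i] * y[i];
    return sum;
  }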

How many instructions do we measure per iteration of the loop? Using a state-of-the-art compiler (GNU GCC 8), I get 7 instructions. Where do these 3 extra instructions come from? We have a loop counter which needs to be incremented. Then this loop counter must be compared with the end-of-loop condition, and finally there is a branch instruction. These three instructions are “inexpensive”: there is probably some instruction fusion happening, along with other clever optimizations. Nevertheless, these instructions are not free.

Let us grab the numbers on an Intel (Skylake) processor:

amount of unrolling    instructions per pair    cycles per pair
         1                      7                     1.6
         2                      5.5                   1.6
         4                      5                     1.3
         8                      4.5                   1.4
        16                      4.25                  1.6

My source code is available.

The number of instructions executed per pair diminishes progressively (approaching 4) as the overhead of the loop shrinks with more unrolling. However, the speed, as measured in cycles, does not keep decreasing: the sweet spot is an unrolling factor of about 4 or 8. In this instance, unrolling is beneficial mostly because of the reduced instruction overhead of the loop, but too much unrolling eventually harms performance.

There are other potential benefits of loop unrolling in more complicated instances. For example, some loaded values can be carried between loop iterations, thus saving load instructions (see the sketch below). If there are branches within the loop, unrolling may help or harm branch prediction. However, I find that a reduced number of instructions is often the key benefit.
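To make the load-reuse point concrete, here is a small sketch of my own (the example and names are invented, not from the post). A rolled loop computing adjacent sums would load x[i] and x[i + 1] on every iteration, touching each element twice; unrolling by two lets each loaded value stay in a register and be reused:

  #include <stddef.h>

  /* out[i] = x[i] + x[i + 1] for i in [0, n - 1). With 2x unrolling,
     each element of x is loaded once instead of twice. */
  void adjacent_sums(const double *x, double *out, size_t n) {
    if (n < 2) return;
    double a = x[0];
    size_t i = 0;
    for (; i + 2 < n; i += 2) {
      double b = x[i + 1];
      double c = x[i + 2];
      out[i] = a + b;        /* a was loaded in the previous iteration */
      out[i + 1] = b + c;    /* b is loaded once, used twice */
      a = c;                 /* carry c into the next iteration */
    }
    for (; i + 1 < n; i++)   /* tail: at most one leftover sum */
      out[i] = x[i] + x[i + 1];
  }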

Michael Geist: Open Banking Is Already Here: My Appearance Before the Senate Standing Committee on Banking, Trade and Commerce

The Senate Standing Committee on Banking, Trade and Commerce has spent the past month and a half actively engaged in a detailed study of the regulatory framework for open banking. The study has included government officials, representatives from Australia and the UK, and Canadian banking stakeholders. I appeared before the committee yesterday as a single-person panel, spending a full hour discussing a wide range of policy concerns. My core message was that the committee’s debate over whether Canada should have open banking misses the bigger issue: millions of Canadians already use open-banking-type services despite the friction involved in making their data easily portable to third-party providers. I recommended several reforms in response, including stronger privacy laws, mandated data portability with informed consumer consent, and consumer protection safeguards that recognize the likely blurring between incumbent banks and third-party providers.

My full opening statement is posted below.

Appearance before the Senate Standing Committee on Banking, Trade and Commerce, April 11, 2019

Good morning. My name is Michael Geist.  I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Internet and E-commerce Law, and I am a member of the Centre for Law, Technology, and Society. My areas of speciality include digital policy, intellectual property, privacy and the Internet. I appear in a personal capacity representing only my own views.

This committee’s study on open banking has been exceptionally interesting and insightful, providing far more context, nuance, and information than the Department of Finance consultation on the issue.

Yet the review has left me somewhat puzzled. Open banking is typically framed – before this committee, in the government consultation, and in the media – as a matter of “if” or sometimes “when”. In other words, some debate whether we need it, while others suggest that it is only a matter of time.

However, I believe the record confirms that open banking is effectively already here. While the banks have largely not provided data portability to their customers, millions of Canadians already provide their banking data to third parties, who frequently use screen scraping to gain access to the banking information. This is presumably provided with customer consent since they are the ones providing the necessary login information.

The screen-scraping approach is widely recognized as risky, given questions about the security of the sensitive data (including login information), the identity of the third parties, and the absence of industry standards. The willingness to use these third-party services, even in the face of the friction that exists without easy data portability, points to the real risk for government policy.

In my view, that real risk lies in doing nothing, not doing something.

Account aggregation, the use of AI, and the identification of alternative products and services may sometimes only come from a third-party provider. We need to act – and act quickly – to facilitate a marketplace that responds to customer demands, fosters innovation, and addresses longstanding consumer frustrations with a banking system that invariably insists that trading cost competitiveness for “stability” is a virtue. If we adopt a consumer-centric perspective on the issue, we should recognize that consumers have demonstrated their interest in open banking, but they have been placed at risk by banks that make it difficult to port their data and by the absence of associated policies and effective privacy safeguards.

I’ve heard several senators ask witnesses what can or should be done. I’ll offer three recommendations.

First, Canada’s private sector privacy law must be updated. Simply put, the law was drafted more than two decades ago and is no longer fit for purpose. There are important debates about the legal protections for data, but the immediate issue is that Canadians rely on PIPEDA for their statutory protections. This law does not have an effective enforcement mechanism, meaning there is limited recourse in the event of a potential misuse, whether by the big banks or by a third party provider.

Moreover, privacy law standards that are increasingly common in other jurisdictions are simply absent from the Canadian landscape. In fact, the Privacy Commissioner of Canada has recently taken to reinterpreting the law as a means of expanding its scope and relevance. For example, earlier this week, the OPC released a new consultation that included its preliminary view that cross-border disclosures of personal information require prior consent. The approach is a significant reversal of longstanding policy that relied upon the accountability principle to ensure that organizations transferring personal information to third parties are ultimately responsible for safeguarding that information.

This change in approach has enormous implications for e-commerce, data flows and potentially open banking. It points yet again to the need for legislative review and reform of the law, rather than OPC guidelines that, if adopted, will likely end up being challenged in Canadian courts.

Second, the government needs to mandate data portability for consumer and small business banking.  The major banks may talk sweetly about their potential support for open banking, but it was only in 2017 that the Canadian Bankers Association was issuing warnings about the open banking risks to consumers and the economy as a whole.

Innovative third-party services exist precisely because they offer products and services not offered by the big banks. The only way to restore the safety of Canadian consumers who face real risks from screen scraping is to mandate that their data be openly shared by the banks where the customer provides informed consent to do so. There are undoubtedly security protocols and standards to be developed, but the starting point is regulated support for a consumer-focused system that gives consumers control by opening their data at their request.

Third, as the committee identifies consumer protections and other safeguards, recognize that the difference between the big banks and third party financial providers will become increasingly blurry for many Canadians. That blurring already exists in other sectors – think telecom and the incumbent providers who operate alongside third party services such as Skype, WhatsApp, and a host of other services that offer functionality once limited to the incumbent providers.

The same will ultimately be true in banking as consumers come to rely on new service providers that offer services alongside the big banks. That suggests that consumer protections and the identification of risks should take a big-picture perspective. In fact, just yesterday, the CBC reported that a report from the Financial Consumer Agency of Canada about aggressive sales tactics by the banks underwent revisions after early drafts were provided to the government and the banking sector. The revisions included the removal of proposed consumer protections.

In other words, we should not pretend that it is only new technologies and third parties that bring with them consumer risks.

I look forward to your questions.

The post Open Banking Is Already Here: My Appearance Before the Senate Standing Committee on Banking, Trade and Commerce appeared first on Michael Geist.

Perlsphere: Maintaining Perl 5 (Tony Cook): March 2019 Grant Report

This is a monthly report by Tony Cook on his grant under Perl 5 Core Maintenance Fund. We thank the TPF sponsors to make this grant possible.

Approximately 15 tickets were reviewed.

[Hours]         [Activity]
  1.00          #131115 debugging, comment
  3.29          #132782 work on tests, testing
                #132782 debugging, comment with tests and about the
                patches supplied.
  3.61          #133888 debugging
                #133888 debugging, review code
                #133888 more debugging, code review and comment
  0.97          #133906 debugging, comment
  2.20          #133913 debugging, test a possible fix, comment with a
                different patch
  0.87          #133922 debugging, comment
  0.70          #133931 research and comment
  3.22          #133936 research, work on tests and a fix
                #133936 more testing, work on docs, comment with patch
  0.59          #133938 not-so-briefly comment
                #133938 comment on cpan ticket too
  0.93          #133949 debugging, find the problem, comment
  3.22          #133951 work on built-in getcwd, fix to
                write_buildcustomize, integration into Cwd.pm
                #133951 re-work a little, testing, fixes
                #133951 polish, more testing, comment with patch
  0.40          #133953 testing, comment
  1.47          #133958 prep, testing
  1.07          discussion with khw on locale leak
  1.60          Review 5.28.2 proposed backports, votes and backports
======
 25.14 hours total

things magazine: Small snippets

Unrailed, a new game about an endless railway (via RPS) / crafty stuff and Hole and Corner / Barcelona from above / Cambridge from the rooftops / London as a leaky sieve of laundered property money / speaking of which, … Continue reading

The Geomblog: New conference announcement

Martin Farach-Colton asked me to mention this, which is definitely NOT a pox on computer systems. 
ACM-SIAM Algorithmic Principles of Computer Systems (APoCS20) 
https://www.siam.org/Conferences/CM/Main/apocs20
January 8, 2020
Hilton Salt Lake City Center, Salt Lake City, Utah, USA
Colocated with SODA, SOSA, and Alenex 
The First ACM-SIAM APoCS is sponsored by SIAM SIAG/ACDA and ACM SIGACT. 
Important Dates:

  • August 9: Abstract Submission and Paper Registration Deadline
  • August 16: Full Paper Deadline
  • October 4: Decision Announcement
Program Chair: Bruce Maggs, Duke University and Akamai Technologies 
Submissions: Contributed papers are sought in all areas of algorithms and architectures that offer insight into the performance and design of computer systems. Topics of interest include, but are not limited to, algorithms and data structures for:

  • Databases
  • Compilers
  • Emerging Architectures
  • Energy Efficient Computing
  • High-performance Computing
  • Management of Massive Data
  • Networks, including Mobile, Ad-Hoc and Sensor Networks
  • Operating Systems
  • Parallel and Distributed Systems
  • Storage Systems

A submission must report original research that has not previously been published and is not concurrently being published elsewhere. Manuscripts must not exceed twelve (12) single-spaced double-column pages, in addition to the bibliography and any pages containing only figures. Submissions must be self-contained, and any extra details may be submitted in a clearly marked appendix.
Steering Committee: 

  • Michael Bender
  • Guy Blelloch
  • Jennifer Chayes
  • Martin Farach-Colton (Chair)
  • Charles Leiserson
  • Don Porter
  • Jennifer Rexford
  • Margo Seltzer

Perlsphere: What's new on CPAN - March 2019

Welcome to “What’s new on CPAN”, a curated look at last month’s new CPAN uploads for your reading and programming pleasure. Enjoy!

APIs & Apps

Config & Devops

  • App::ucpan updates CPAN modules with easy-to-read information
  • Dotenv supports per-environment configurations, which is the only way to do them in 12 factor apps

Data

Hardware

Web

things magazine: Air Train

Our horrid future, part 235. For the tech elite, luxury means getting away from what they created. Meanwhile, SF continues to descend into a social hellscape for anyone who isn’t wallowing in venture capital / surprisingly, there is still so … Continue reading

OCaml Planet: opam 2.0.4 release

We are pleased to announce the release of opam 2.0.4.

This new version contains some backported fixes:

Note: To homogenise the macOS name in system detection, we decided to keep macos and to convert darwin to macos in opam. For the moment, so as not to break jobs & CIs, we keep uploading both darwin & macos binaries, but from the 2.1.0 release onward, only the macos ones will be kept.


Installation instructions (unchanged):

  1. From binaries: run
    sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)

    or download manually from the Github “Releases” page to your PATH. In this case, don’t forget to run opam init --reinit -ni to enable sandboxing if you had version 2.0.0~rc manually installed, or to update your sandbox script.

  2. From source, using opam:
    opam update; opam install opam-devel

(then copy the opam binary to your PATH as explained, and don’t forget to run opam init --reinit -ni to enable sandboxing if you had version 2.0.0~rc manually installed, or to update your sandbox script)

  3. From source, manually: see the instructions in the README.

We hope you enjoy this new minor version, and remain open to bug reports and suggestions.

NOTE: this article is cross-posted on opam.ocaml.org and ocamlpro.com. Please head to the latter for the comments!

i like this art: Daniel Gustav Cramer

 

Daniel Gustav Cramer

Work from his oeuvre.

“Daniel Gustav Cramer is known best for his sparse aesthetic in multiple mediums. An ambitious ongoing series, simply titled “Works” (begun 2009), is comprised of a variegated range of work including film, sculptures, installations, and photography. In many of these, he seeks out unspectacular scenes, but ones that both have a quality of vastness and intimate or personal experience—in other words, how things can appear distant and close all at once. Subjects have included lone boat journeys, vast and foggy mountain ranges, and dense forests with traces of human presence. He treats individual pieces as series unto themselves by composing a sequence of images illustrating lapses in time. Cramer is also interested in the idea of memory and the infinite, which he explores via books and archives as medium and subject.” – Artsy

Michael Geist: Canadian Privacy Commissioner Signals Major Shift in Approach on Cross-Border Data Transfers

The Office of the Privacy Commissioner of Canada has released a consultation paper that signals a major shift in its position on data transfers, indicating that it now believes that cross-border disclosures of personal information require prior consent. The approach is a significant reversal of longstanding policy that relied upon the accountability principle to ensure that organizations transferring personal information to third parties are ultimately responsible for safeguarding that information. In fact, OPC guidelines from January 2009 explicitly stated that “assuming the information is being used for the purpose it was originally collected, additional consent for the transfer is not required.”

The federal privacy commissioner now says that “a company that is disclosing personal information across a border, including for processing, must obtain consent”, adding that “it is the OPC’s view that individuals would reasonably expect to be notified if their information was to be disclosed outside of Canada and be subject to the legal regime of another country.” While this position is a preliminary one – the office is accepting comments in a consultation until June 4, 2019 – there are distinct similarities with the OPC’s approach to the right to be forgotten. In that instance, despite the absence of a right-to-be-forgotten principle under Canadian law, the office simply decided to read a right to de-index search results into PIPEDA. The issue is currently before the courts.

In this case, the absence of meaningful updates to Canadian privacy law for many years has led to another exceptionally aggressive interpretation of the law by the OPC, effectively seeking to update the law through interpretation rather than actual legislative reform. Since PIPEDA’s inception, the accountability principle has been touted as a foundational aspect of the law, providing assurance that Canadians’ privacy is protected regardless of where it goes or who processes it. Yet the OPC seemingly now doubts that view, suggesting that there are risks associated with data that leaves the country.

The OPC is careful to note that it believes its position is consistent with Canada’s international trade obligations, but the issue could be subject to challenge. Article 14.11 of the CPTPP requires Canada (and all parties) to allow cross-border transfer of information by electronic means. The article states that:

Nothing in this Article shall prevent a Party from adopting or maintaining measures inconsistent with paragraph 2 to achieve a legitimate public policy objective, provided that the measure:
(a) is not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination or a disguised restriction on trade; and
(b) does not impose restrictions on transfers of information greater than are required to achieve the objective.

The imposition of consent requirements for cross-border data transfers could be regarded as imposing restrictions greater than required to achieve the objective of privacy protection, given that PIPEDA has long been said to provide such protections through accountability without the need for this additional consent regime.

Regardless of the international trade implications, however, the OPC interpretation would have enormous implications for e-commerce and data flows with many organizations forced to rethink longstanding compliance policies. The proposal is sure to generate opposition with some understandably asking whether the issue would be more properly addressed by government policy within a national data strategy and privacy law reform, rather than an OPC guideline that if enacted is likely to end up in the Canadian courts.

The post Canadian Privacy Commissioner Signals Major Shift in Approach on Cross-Border Data Transfers appeared first on Michael Geist.

OCaml Planet: Dune 1.9.0

Tarides is pleased to have contributed to the dune 1.9.0 release, which introduces the concept of library variants. Thanks to this update, unikernel builds are becoming easier and faster in the MirageOS universe! This also opens the door to a better cross-compilation story, which will ease the addition of new MirageOS backends (trustzone, ESP32, RISC-V, etc.)

This post has also been posted to the Dune blog. See also the discuss forum for more details.

Dune 1.9.0

Changes include:

  • Coloring in the watch mode (#1956)
  • $ dune init command to create or update project boilerplate (#1448)
  • Allow "." in c_names and cxx_names (#2036)
  • Experimental Coq support
  • Support for library variants and default implementations (#1900)

Variants

In dune 1.7.0, the concept of a virtual library was introduced: https://dune.build/blog/virtual-libraries/. This feature allows marking an abstract library as virtual and then providing several implementations for it. These implementations could target different platforms (unix, xen, js), use different algorithms, or be written with or without C code. However, each implementation in a project’s dependency tree had to be selected manually. Dune 1.9.0 introduces features for the automatic selection of implementations.

Library variants

Variants are a tagging mechanism for selecting implementations at the final linking step. There's not much to add to make your implementation use variants. For example, you could decide to design a bar_js library which is the JavaScript implementation of bar, a virtual library. All you need to do is specify a js tag using the variant option.

(library
 (name bar_js)
 (implements bar)
 (variant js)); <-- variant specification

Now any executable that depends on bar can automatically select the bar_js library variant using the variants option in the dune file.

(executable
 (name foo)
 (libraries bar baz)
 (variants js)); <-- variants selection

Common variants

Language selection

In your projects you might want to trade off speed for portability:

  • ocaml: pure OCaml
  • c: OCaml accelerated by C

JavaScript backend

  • js: code aiming for a Node backend, using Js_of_ocaml

Mirage backends

The Mirage project (mirage.io) will make extensive use of this feature in order to select the appropriate dependencies according to the selected backend.

  • unix: Unikernels as Unix applications, running on top of mirage-unix
  • xen: Xen backend, on top of mirage-xen
  • freestanding: Freestanding backend, on top of mirage-solo5

Default implementation

To facilitate the transition from normal libraries to virtual ones, it's possible to specify an implementation that is selected by default. This default implementation is chosen if no implementation has been selected after variant resolution.

(library
 (name bar)
 (virtual_modules hello)
 (default_implementation bar_unix)); <-- default implementation selection

Selection mechanism

Implementations are selected according to the following priority rules:

  • manual selection of an implementation overrides everything (see the sketch after this list)
  • after that comes selection by variants
  • finally unimplemented virtual libraries can select their default implementation
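As a sketch of that first rule (my example, reusing the bar_unix name from above): depending on a concrete implementation directly in the libraries field selects it manually, bypassing variant resolution and any default implementation.

(executable
 (name foo)
 (libraries bar_unix baz)); <-- bar_unix selected manually, overriding variants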

Libraries may depend on specific implementations but this is not recommended. In this case, several things can happen:

  • the implementation conflicts with a manually selected implementation: resolution fails.
  • the implementation overrides variants and default implementations: a cycle check is done and this either resolves or fails.

Conclusion

Variant libraries and default implementations are fully documented here. This feature improves the usability of virtual libraries.

This commit shows the amount of changes needed to make a virtual library use variants.

Coq support

Dune now supports building Coq projects. To enable the experimental Coq extension, add (using coq 0.1) to your dune-project file. Then, you can use the (coqlib ...) stanza to declare Coq libraries.

A typical dune file for a Coq project will look like:

(include_subdirs qualified) ; Use if your development is based on sub directories

(coqlib
  (name Equations)                  ; Name of wrapper module
  (public_name equations.Equations) ; Generate an .install file
  (synopsis "Equations Plugin")     ; Synopsis
  (libraries equations.plugin)      ; ML dependencies (for plugins)
  (modules :standard \ IdDec)       ; modules to build
  (flags -w -notation-override))    ; coqc flags

See the documentation of the extension for more details.

Credits

This release also contains many other changes and bug fixes that can be found on the discuss announce.

Special thanks to dune maintainers and contributors for this release: @rgrinberg, @emillon, @shonfeder and @ejgallego!

Tea Masters: Le temps des Oolongs


OB impérial 2016
 Next Sunday, I leave for NYC and Pennsylvania for 10 days. It will once again be an opportunity to give tea classes with Teaparker to a large number of American students who are passionate about Chinese tea. Also, I had planned to make a trip to Alishan today to select some spring Jinxuan. But when I called the farmers, they told me that the harvests won't begin until after April 15, and for qingxin Oolong we will even have to wait until the 24th for production to get under way! So I will take care of that as soon as I am back in Taiwan!

For now, my only tea of 2019 is this Dong Pian of SiJiChun. The weather before QingMing was not very good in the north of Taiwan, which is why I have written off the BiLuoChun of San Hsia this year. This reminds us that tea is a product of nature, sensitive to its variations. And Oolong time is not green tea time. Oolong needs maturity and is not harvested when the leaves are only budding. The exception is high-quality Oriental Beauty Oolong.
But the reason for this exception is that this tea comes not from the first harvest of spring but from the second, when the weather turns warmer and the finest aromas are found only in the buds bitten by our little green crickets. The result is a highly oxidized Oolong with the scent of a mysterious, feminine perfume. Caliente! The most Latin of Formosa's Oolongs!
Green tea time is short, but Oolong time is long. The harvests on the plains practically never stop. The plantation photos in this article date from February 21, in Mingjian. You can see the buds growing in the middle of winter.
And you can see the need to water the tea bushes during this dry season. Indeed, it is a lack of water in central Taiwan that also explains the delay in the growth of the leaves.
It is also because Oolong time is long that these teas keep well over several years, and that I am treating myself these days to harvests from 2016/17!

Planet Lisp: Lispers.de: Berlin Lispers Meetup, Monday, 15th April 2019

We meet again on Monday 8pm, 15th April. Our host this time is James Anderson (www.dydra.com).

Berlin Lispers is about all flavors of Lisp including Emacs Lisp, Common Lisp, Clojure, Scheme.

We will have two talks this time.

Hans Hübner will tell us about "Reanimating VAX LISP - A CLtL1 implementation for VAX/VMS".

And Ingo Mohr will continue his talk "About the Unknown East of the Ancient LISP World. History and Thoughts. Part II: Eastern Common LISP and a LISP Machine."

We meet in the Taut-Haus at Engeldamm 70 in Berlin-Mitte; the bell is "James Anderson". It is located within a 10-minute walk of U Moritzplatz, U Kottbusser Tor, or Ostbahnhof. In case of questions, call Christian at +49 1578 70 51 61 4.

OCaml Weekly News: OCaml Weekly News, 09 Apr 2019

  1. Easy_logging 0.2
  2. OCaml Users and Developers Meeting 2019
  3. Firewall-tree - demo using MirageOS is available, with overview of current progress
  4. routes: path based routing for web applications
  5. Other OCaml News

Quiet Earth: Sid Haig is HIGH ON THE HOG in New Horror Comedy [Trailer]

Indican Pictures has picked up distribution rights to Tony Wash’s High on the Hog. The film stars Sid Haig (3 From Hell), Joe Estevez (Public Enemy) and Robert Z’dar (Maniac Cop I & II & III), in his final role.

The film is a genre bender, as Big Daddy (Haig) fights against the government to maintain his grow-op. Haig is also the producer on this grindhouse feature.


Part comedy and part horror, High on the Hog takes place on a remote farm. Here, everything is grown, including the green herb. But, local government officials need a bust and t [Continued ...]

Quiet Earth: HELLBOY R-Rated "Sizzle Reel"! Movie Hits April 12

I'm such a huge fan of Neil Marshall that I can't help but wish he were doing an original film, but if there's any comic book property that fits his R-rated sensibilities, then it's probably Hellboy.

Yes, there is some controversy over the fact that Del Toro didn't come back and Ron Perlman isn't playing the titular character, but Stranger Things' David Harbour does seem like a great fit if it has to be anyone else in the role. So I feel like it all evens out in the wash for fans... as long as the film ends up being good.


Synopsis:
Based on the graphic novels by Mike Mignola, Hellboy, caught between the worlds of the supernatural and human, battles an ancient sorceress bent on revenge.


Hellboy stars David Harbour, Milla Jovovich and Ian McShane.


Hell [Continued ...]

bit-player: 737: The MAX Mess

Controlled Flight into Terrain is the aviation industry’s term for what happens when a properly functioning airplane plows into the ground because the pilots are distracted or disoriented. What a nightmare. Even worse, in my estimation, is Automated Flight into Terrain, when an aircraft’s control system forces it into a fatal nose dive despite the frantic efforts of the crew to save it. That is the conjectured cause of two recent crashes of new Boeing 737 MAX 8 airplanes. I’ve been trying to reason my way through to an understanding of how those accidents could have happened.

Disclaimer: The investigations of the MAX 8 disasters are in an early stage, so much of what follows is based on secondary sources—in other words, on leaks and rumors and the speculations of people who may or may not know what they’re talking about. As for my own speculations: I’m not an aeronautical engineer, or an airframe mechanic, or a control theorist. I’m not even a pilot. Please keep that in mind if you choose to read on.

The accidents

Early on the morning of October 29, 2018, Lion Air Flight 610 departed Jakarta, Indonesia, with 189 people on board. The airplane was a four-month-old 737 MAX 8—the latest model in a line of Boeing aircraft that goes back to the 1960s. Takeoff and climb were normal to about 1,600 feet, where the pilots retracted the flaps (wing extensions that increase lift at low speed). At that point the aircraft unexpectedly descended to 900 feet. In radio conversations with air traffic controllers, the pilots reported a “flight control problem” and asked about their altitude and speed as displayed on the controllers’ radar screens. Cockpit instruments were giving inconsistent readings. The pilots then redeployed the flaps and climbed to 5,000 feet, but when the flaps were stowed again, the nose dipped and the plane began to lose altitude. Over the next six or seven minutes the pilots engaged in a tug of war with their own aircraft, as they struggled to keep the nose level but the flight control system repeatedly pushed it down. In the end the machine won. The airplane plunged into the sea at high speed, killing everyone aboard.

The second crash happened March 8, when Ethiopian Airlines Flight 302 went down six minutes after taking off from Addis Ababa, killing 157. The aircraft was another MAX 8, just two months old. The pilots reported control problems, and data from a satellite tracking service showed sharp fluctuations in altitude. The similarities to the Lion Air crash set off alarm bells: If the same malfunction or design flaw caused both accidents, it might also cause more. Within days, the worldwide fleet of 737 MAX aircraft was grounded. Data recovered since then from the Flight 302 wreckage has reinforced the suspicion that the two accidents are closely related.

The grim fate of Lion Air 610 can be traced in brightly colored squiggles extracted from the flight data recorder. (The chart was published in November in a preliminary report from the Indonesian National Committee on Transportation Safety.)

[Chart: Lion Air 610 flight data recorder traces]

The outline of the story is given in the altitude traces at the bottom of the chart. The initial climb is interrupted by a sharp dip; then a further climb is followed by a long, erratic roller coaster ride. At the end comes the dive, as the aircraft plunges 5,000 feet in a little more than 10 seconds. (Why are there two altitude curves, separated by a few hundred feet? I’ll come back to that question at the end of this long screed.)

All those ups and downs were caused by movements of the horizontal stabilizer, the small winglike control surface at the rear of the fuselage. [Diagram: stabilizer and elevator] The stabilizer controls the airplane’s pitch attitude—nose-up vs. nose-down. On the 737 it does so in two ways. A mechanism for pitch trim tilts the entire stabilizer, whereas pushing or pulling on the pilot’s control yoke moves the elevator, a hinged tab at the rear of the stabilizer. In either case, moving the trailing edge of the surface upward tends to force the nose of the airplane up, and vice versa. Here we’re mainly concerned with trim changes rather than elevator movements.

Commands to the pitch-trim system and their effect on the airplane are shown in three traces from the flight data, which I reproduce here for convenience:

[Chart: Lion Air 610 flight data, pitch-trim commands and stabilizer position]

The line labeled “trim manual” (light blue) reflects the pilots’ inputs, “trim automatic” (orange) shows commands from the airplane’s electronic systems, and “pitch trim position” (dark blue) represents the tilt of the stabilizer, with higher position on the scale denoting a nose-up command. This is where the tug of war between man and machine is clearly evident. In the latter half of the flight, the automatic trim system repeatedly commands nose down, at intervals of roughly 10 seconds. In the breaks between those automated commands, the pilots dial in nose-up trim, using buttons on the control yoke. In response to these conflicting commands, the position of the horizontal stabilizer oscillates with a period of 15 or 20 seconds. The see-sawing motion continues for at least 20 cycles, but toward the end the unrelenting automatic nose-down adjustments prevail over the briefer nose-up commands from the pilots. The stabilizer finally reaches its limiting nose-down deflection and stays there as the airplane plummets into the sea.

Angle of attack

What’s to blame for the perverse behavior of the automatic pitch trim system? The accusatory finger is pointing at something called MCAS, a new feature of the 737 MAX series. MCAS stands for Maneuvering Characteristics Augmentation System—an impressively polysyllabic name that tells you nothing about what the thing is or what it does. As I understand it, MCAS is not a piece of hardware; there’s no box labeled MCAS in the airplane’s electronic equipment bays. MCAS consists entirely of software. It’s a program running on a computer.

MCAS has just one function. It is designed to help prevent an aerodynamic stall, a situation in which an airplane has its nose pointed up so high with respect to the surrounding airflow that the wings can’t keep it aloft. A stall is a little like what happens to a bicyclist climbing a hill that keeps getting steeper and steeper: Eventually the rider runs out of oomph, wobbles a bit, and then rolls back to the bottom. Pilots are taught to recover from stalls, but it’s not a skill they routinely practice with a planeful of passengers. In commercial aviation the emphasis is on avoiding stalls—forestalling them, so to speak. Airliners have mechanisms to detect an imminent stall and warn the pilot with lights and horns and a “stick shaker” that vibrates the control yoke. On Flight 610, the captain’s stick was shaking almost from start to finish.

Some aircraft go beyond mere warnings when a stall threatens. If the aircraft’s nose continues to pitch upward, an automated system intervenes to push it back down—if necessary overriding the manual control inputs of the pilot. MCAS is designed to do exactly this. It is armed and ready whenever two criteria are met: The flaps are up (generally true except during takeoff and landing) and the airplane is under manual control (not autopilot). Under these conditions the system is triggered whenever an aerodynamic quantity called angle of attack, or AoA, rises into a dangerous range.
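In outline, the arming-and-triggering logic amounts to a simple predicate. The sketch below is my own toy restatement of the conditions just described; all of the names, types, and the threshold are invented for illustration, since the real system’s internals are not public:

  #include <stdbool.h>

  /* Toy model of the MCAS arming/trigger conditions described above. */
  typedef struct {
    bool flaps_up;           /* arming criterion 1: flaps retracted */
    bool autopilot_engaged;  /* arming criterion 2: must be manual flight */
    double aoa_degrees;      /* angle of attack, from a single AoA vane */
  } FlightState;

  bool mcas_commands_nose_down(FlightState s, double aoa_limit) {
    bool armed = s.flaps_up && !s.autopilot_engaged;
    return armed && s.aoa_degrees > aoa_limit;  /* trigger on excessive AoA */
  }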

Angle of attack is a concept subtle enough to merit a diagram. (Adapted from Lisa R. Le Vie, Review of Research on Angle-of-Attack Indicator Effectiveness.)

[Diagram: pitch attitude, flight-path angle, and angle of attack]

The various angles at issue are rotations of the aircraft body around the pitch axis, a line parallel to the wings, perpendicular to the fuselage, and passing through the airplane’s center of gravity. If you’re sitting in an exit row, the pitch axis might run right under your seat. Rotation about the pitch axis tilts the nose up or down. Pitch attitude is defined as the angle of the fuselage with respect to a horizontal plane. The flight-path angle is measured between the horizontal plane and the aircraft’s velocity vector, thus showing how steeply it is climbing or descending. Angle of attack is the difference between pitch attitude and flight-path angle. It is the angle at which the aircraft is moving through the surrounding air (assuming the air itself is motionless, i.e., no wind).
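In symbols (my shorthand, using the conventional Greek letters rather than anything from the diagram itself): with pitch attitude θ and flight-path angle γ, the angle of attack is

  α = θ - γ

For instance, an airplane pitched 10 degrees nose-up while climbing along a 7-degree flight path has an angle of attack of 3 degrees.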

AoA affects both lift (the upward force opposing the downward tug of gravity) and drag (the dissipative force opposing forward motion and the thrust of the engines). As AoA increases from zero, lift is enhanced because of air impinging on the underside of the wings and fuselage. For the same reason, however, drag also increases. As the angle of attack grows even steeper, the flow of air over the wings becomes turbulent; beyond that point lift diminishes but drag continues increasing. That’s where the stall sets in. The critical angle for a stall depends on speed, weight, and other factors, but usually it’s no more than 15 degrees.

Neither the Lion Air nor the Ethiopian flight was ever in danger of stalling, so if MCAS was activated, it must have been by mistake. The working hypothesis mentioned in many press accounts is that the system received and acted upon erroneous input from a failed AoA sensor.

A sensor to measure angle of attack is conceptually simple. It’s essentially a weathervane poking out into the airstream. In the photo below, the angle-of-attack sensor is the small black vane just forward of the “737 MAX” legend. Hinged at the front, the vane rotates to align itself with the local airflow and generates an electrical signal that represents the vane’s angle with respect to the axis of the fuselage. The 737 MAX has two angle-of-attack vanes, one on each side of the nose. (The protruding devices above the AoA vane are pitot tubes, used to measure air speed. Another device below the word MAX is probably a temperature sensor.)

[Photo: the nose of a 737 MAX, with the angle-of-attack vane visible]

Angle of attack was not among the variables displayed to the pilots of the Lion Air 737, but the flight data recorder did capture signals derived from the two AoA sensors:

[Chart: Lion Air 610 flight data, left and right angle-of-attack readings]

There’s something dreadfully wrong here. The left sensor is indicating an angle of attack about 20 degrees steeper than the right sensor. That’s a huge discrepancy. There’s no plausible way those disparate readings could reflect the true state of the airplane’s motion through the air, with the left side of the nose pointing sky-high and the right side near level. One of the measurements must be wrong, and the higher reading is the suspect one. If the true angle of attack ever reached 20 degrees, the airplane would already be in a deep stall. Unfortunately, on Flight 610 MCAS was taking data only from the left-side AoA sensor. It interpreted the nonsensical measurement as a valid indicator of aircraft attitude, and worked relentlessly to correct it, up to the very moment the airplane hit the sea.

Cockpit automation

The tragedies in Jakarta and Addis Ababa are being framed as a cautionary tale of automation run amok, with computers usurping the authority of pilots. The Washington Post editorialized:

A second fatal airplane accident involving a Boeing 737 MAX 8 may have been a case of man vs. machine…. The debacle shows that regulators should apply extra review to systems that take control away from humans when safety is at stake.

Tom Dieusaert, a Belgian journalist who writes often on aviation and computation, offered this opinion:

What can’t be denied is that the Boeing of Flight JT610 had serious computer problems. And in the hi-tech, fly-by-wire world of aircraft manufacturers, where pilots are reduced to button pushers and passive observers, these accidents are prone to happen more in the future.

The button-pushing pilots are particularly irate. Gregory Travis, who is both a pilot and software developer, summed up his feelings in this acerbic comment:

“Raise the nose, HAL.”

“I’m sorry, Dave, I can’t do that.”

Even Donald Trump tweeted on the issue:

Airplanes are becoming far too complex to fly. Pilots are no longer needed, but rather computer scientists from MIT. I see it all the time in many products. Always seeking to go one unnecessary step further, when often old and simpler is far better. Split second decisions are….

….needed, and the complexity creates danger. All of this for great cost yet very little gain. I don’t know about you, but I don’t want Albert Einstein to be my pilot. I want great flying professionals that are allowed to easily and quickly take control of a plane!

There’s considerable irony in the complaint that the 737 is too automated; in many respects the aircraft is in fact quaintly old-fashioned. The basic design goes back more than 50 years, and even in the latest MAX models quite a lot of 1960s technology survives. The primary flight controls are hydraulic, with a spider web of high-pressure tubing running directly from the control yokes in the cockpit to the ailerons, elevator, and rudder. If the hydraulic systems should fail, there’s a purely mechanical backup, with cables and pulleys to operate the various control surfaces. For stabilizer trim the primary actuator is an electric motor, but again there’s a mechanical fallback, with crank wheels near the pilots’ knees pulling on cables that run all the way back to the tail.

Other aircraft are much more dependent on computers and electronics. The 737’s principal competitor, the Airbus A320, is a thoroughgoing fly-by-wire vehicle. The pilot flies the computer, and the computer flies the airplane. Specifically, the pilot decides where to go—up, down, left, right—but the computer decides how to get there, choosing which control surfaces to deflect and by how much. Boeing’s own more recent designs, the 777 and 787, also rely on digital controls. Indeed, the latest models from both companies go a step beyond fly-by-wire to fly-by-network. Most of the communication from sensors to computers and onward to control surfaces consists of digital packets flowing through a variant of Ethernet. The airplane is a computer peripheral.

Thus if you want to gripe about the dangers and indignities of automation on the flight deck, the 737 is not the most obvious place to start. And a Luddite campaign to smash all the avionics and put pilots back in the seat of their pants would be a dangerously off-target response to the current predicament. There’s no question the 737 MAX has a critical problem. It’s a matter of life and death for those who would fly in it and possibly also for the Boeing Company. But the problem didn’t start with MCAS. It started with earlier decisions that made MCAS necessary. Furthermore, the problem may not end with the remedy that Boeing has proposed—a software update that will hobble MCAS and leave more to the discretion of pilots.

Maxing out the 737

The 737 flew its first passengers in 1968. It was (and still is) the smallest member of the Boeing family of jet airliners, and it is also the most popular by far. More than 10,000 have been sold, and Boeing has orders for another 4,600. Of course there have been changes over the years, especially to engines and instruments. A 1980s update came to be known as 737 Classic, and a 1997 model was called 737 NG, for “next generation.” (Now, with the MAX, the NG has become the previous generation.) Through all these revisions, however, the basic structure of the airframe has hardly changed.

Ten years ago, it looked like the 737 had finally come to the end of its life. Boeing announced it would develop an all-new design as a replacement, with a hull built of lightweight composite materials rather than aluminum. Competitive pressures forced a change of course. Airbus had a head start on the A320neo, an update that would bring more efficient engines to their entry in the same market segment. The revised Airbus would be ready around 2015, whereas Boeing’s clean-slate project would take a decade. Customers were threatening to defect. In particular, American Airlines—long a Boeing loyalist—was negotiating a large order of A320neos.

In 2011 Boeing scrapped the plan for an all-new design and elected to do the same thing Airbus was doing: bolt new engines onto an old airframe. This would eliminate most of the up-front design work, as well as the need to build tooling and manufacturing facilities. Testing and certification by the FAA would also go quicker, so that the first deliveries might be made in five or six years, not too far behind Airbus.

The new engines mated to the 737 promised a 14 percent gain in fuel efficiency, which might save an airline a million dollars a year in operating costs. (A 737-800, a pre-MAX model, burns about 800 gallons of jet fuel per hour aloft. That comes to $2,000 at $2.50 per gallon. If the airplane flies 10 hours a day, the annual fuel bill is $7.3 million. Fourteen percent of that is just over $1 million.) The better fuel economy would also increase the airplane’s range. And to sweeten the deal Boeing proposed to keep enough of the airframe unchanged that the new model would operate under the same “type certificate” as the old one. A pilot qualified to fly the 737 NG could step into the MAX without extensive retraining.

[Photos: the 737-200 and 737 MAX compared. Sources: (left) Bryan via Wikimedia, CC BY 2.0; (right) Steve Lynes via Wikimedia, CC BY 2.0.]

The original 1960s 737 had two cigar-shaped engines, long and skinny, tucked up under the wings (left photo above). Since then, jet engines have grown fat and stubby. They derive much of their thrust not from the jet exhaust coming out of the tailpipe but from “bypass” air moved by a large-diameter fan. Such engines would scrape on the ground if they were mounted under the wings of the 737; instead they are perched on pylons that extend forward from the leading edge of the wing. The engines on the MAX models (right photo) are the fattest yet, with a fan 69 inches in diameter. Compared with the NG series, the MAX engines are pushed a few inches farther forward and hang a few inches lower.

A New York Times article by David Gelles, Natalie Kitroeff, Jack Nicas, and Rebecca R. Ruiz describes the plane’s development as hurried and hectic.

Months behind Airbus, Boeing had to play catch-up. The pace of the work on the 737 Max was frenetic, according to current and former employees who spoke with The New York Times…. Engineers were pushed to submit technical drawings and designs at roughly double the normal pace, former employees said.

The Times article also notes: “Although the project had been hectic, current and former employees said they had finished it feeling confident in the safety of the plane.”

Pitch instability

Sometime during the development of the MAX series, Boeing got an unpleasant surprise. The new engines were causing unwanted pitch-up movements under certain flight conditions. When I first read about this problem, soon after the Lion Air crash, I found the following explanation in an article by Sean Broderick and Guy Norris in Aviation Week and Space Technology (Nov. 26–Dec. 9, 2018, pp. 56–57):

Like all turbofan-powered airliners in which the thrust lines of the engines pass below the center of gravity (CG), any change in thrust on the 737 will result in a change of flight path angle caused by the vertical component of thrust.

In other words, the low-slung engines not only push the airplane forward but also tend to twirl it around the pitch axis. It’s like a motorcycle doing wheelies. Because the MAX engines are mounted farther below and in front of the center of gravity, they act through a longer lever arm and cause more severe pitch-up motions.

I found more detail on this effect in an earlier Aviation Week article, a 2017 pilot report by Fred George, describing his first flight at the controls of the new MAX 8.

The aircraft has sufficient natural speed stability through much of its flight envelope. But with as much as 58,000 lb. of thrust available from engines mounted well below the center of gravity, there is pronounced thrust-versus-pitch coupling at low speeds, especially with aft center of gravity (CG) and at light gross weights. Boeing equips the aircraft with a speed-stability augmentation function that helps to compensate for the coupling by automatically trimming the horizontal stabilizer according to indicated speed, thrust lever position and CG. Pilots still must be aware of the effect of thrust changes on pitching moment and make purposeful control-wheel and pitch-trim inputs to counter it.

The reference to an “augmentation function” that works by “automatically trimming the horizontal stabilizer” sounded awfully familiar, but it turns out this is not MCAS. The system that compensates for thrust-pitch coupling is known as speed-trim. Like MCAS, it works “behind the pilot’s back,” making adjustments to control surfaces that were not directly commanded. There’s yet another system of this kind called mach-trim that silently corrects a different pitch anomaly when the aircraft reaches transonic speeds, at about mach 0.6. Neither of these systems is new to the MAX series of aircraft; they have been part of the control algorithm at least since the NG came out in 1997. MCAS runs on the same computer as speed-trim and mach-trim and is part of the same software system, but it is a distinct function. And according to what I’ve been reading in the past few weeks, it addresses a different problem—one that seems more sinister.

Most aircraft have the pleasant property of static stability. When an airplane is properly trimmed for level flight, you can let go of the controls—at least briefly—and it will continue on a stable path. Moreover, if you pull back on the control yoke to point the nose up, then let go again, the pitch angle should return to neutral. The layout of the airplane’s various airfoil surfaces accounts for this behavior. When the nose goes up, the tail goes down, pushing the underside of the horizontal stabilizer into the airstream. The pressure of the air against this tail surface provides a restoring force that brings the tail back up and the nose back down. (That’s why it’s called a stabilizer!) This negative feedback loop is built into the structure of the airplane, so that any departure from equilibrium creates a force that opposes the disturbance.

[Diagram: pitch stability]

However, the tail surface, with its helpful stabilizing influence, is not the only structure that affects the balance of aerodynamic forces. Jet engines are not designed to contribute lift to the airplane, but at high angles of attack they can do so, as the airstream impinges on the lower surface of each engine’s outer covering, or nacelle. When the engines are well forward of the center of gravity, the lift creates a pitch-up turning moment. If this moment exceeds the counterbalancing force from the tail, the aircraft is unstable. A nose-up attitude generates forces that raise the nose still higher, and positive feedback takes over.

Is the 737 MAX vulnerable to such runaway pitch excursions? The possibility had not occurred to me until I read a commentary on MCAS on the Boeing 737 Technical Site, a web publication produced by Chris Brady, a former 737 pilot and flight instructor. He writes:

MCAS is a longitudinal stability enhancement. It is not for stall prevention or to make the MAX handle like the NG; it was introduced to counteract the non-linear lift of the LEAP-1B engine nacelles and give a steady increase in stick force as AoA increases. The LEAP engines are both larger and relocated slightly up and forward from the previous NG CFM56-7 engines to accommodate their larger fan diameter. This new location and size of the nacelle cause the vortex flow off the nacelle body to produce lift at high AoA; as the nacelle is ahead of the CofG this lift causes a slight pitch-up effect (ie a reducing stick force) which could lead the pilot to further increase the back pressure on the yoke and send the aircraft closer towards the stall. This non-linear/reducing stick force is not allowable under FAR §25.173 “Static longitudinal stability”. (FAR = Federal Air Regulations; Part 25 deals with airworthiness standards for transport category airplanes.) MCAS was therefore introduced to give an automatic nose down stabilizer input during steep turns with elevated load factors (high AoA) and during flaps up flight at airspeeds approaching stall.

Brady cites no sources for this statement, and as far as I know Boeing has neither confirmed nor denied. But Aviation Week, which earlier mentioned the thrust-pitch linkage, has more recently (issue of March 20) gotten behind the nacelle-lift instability hypothesis:

The MAX’s larger CFM Leap 1 engines create more lift at high AOA and give the aircraft a greater pitch-up moment than the CFM56-7-equipped NG. The MCAS was added as a certification requirement to minimize the handling difference between the MAX and NG.

Assuming the Brady account is correct, an interesting question is when Boeing noticed the instability. Were the designers aware of this hazard from the outset? Did it emerge during early computer simulations, or in wind tunnel testing of scale models? A story by Dominic Gates in the Seattle Times hints that Boeing may not have recognized the severity of the problem until flight tests of the first completed aircraft began in 2015.

According to Gates, the safety analysis that Boeing submitted to the FAA specified that MCAS would be allowed to move the horizontal stabilizer by no more than 0.6 degree. In the airplane ultimately released to the market, MCAS can go as far as 2.5 degrees, and it can act repeatedly until reaching the mechanical limit of motion at about 5 degrees. Gates writes:

That limit was later increased after flight tests showed that a more powerful movement of the tail was required to avert a high-speed stall, when the plane is in danger of losing lift and spiraling down.

The behavior of a plane in a high angle-of-attack stall is difficult to model in advance purely by analysis and so, as test pilots work through stall-recovery routines during flight tests on a new airplane, it’s not uncommon to tweak the control software to refine the jet’s performance.
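Taking the published figures at face value, the cumulative effect of repeated activations is easy to tally. A back-of-the-envelope sketch; the starting trim position is my assumption:

    # Figures quoted above: up to 2.5 degrees of nose-down stabilizer per
    # MCAS activation, repeating until a mechanical stop near 5 degrees.
    LIMIT = 5.0           # approximate mechanical limit, degrees
    PER_ACTIVATION = 2.5  # maximum single MCAS command, degrees

    trim = 0.0  # assumed starting stabilizer position
    for n in range(1, 4):
        trim = min(trim + PER_ACTIVATION, LIMIT)
        print(f"after activation {n}: {trim:.1f} degrees nose-down")
    # Two uncountered activations are enough to reach the stop.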

The high-AoA instability of the MAX appears to be a property of the aerodynamic form of the entire aircraft, and so a direct way to suppress it would be to alter that form. For example, enlarging the tail surface might restore static stability. But such airframe modifications would have delayed the delivery of the airplane, especially if the need for them was discovered only after the first prototypes were already flying. Structural changes might also jeopardize inclusion of the new model under the old type certificate. Modifying software instead of aluminum must have looked like an attractive alternative. Someday, perhaps, we’ll learn how the decision was made.

By the way, according to Gates, the safety document filed with the FAA specifying a 0.6 degree limit has yet to be amended to reflect the true range of MCAS commands.

Flying while unstable

Instability is not necessarily the kiss of death in an airplane. There have been at least a few successful unstable designs, starting with the 1903 Wright Flyer. The Wright brothers deliberately put the horizontal stabilizer in front of the wing rather than behind it because their earlier experiments with kites and gliders had shown that what we call stability can also be described as sluggishness. The Flyer’s forward control surfaces (known as canards) tended to amplify any slight nose-up or nose-down motions. Maintaining a steady pitch attitude demanded high alertness from the pilot, but it also allowed the airplane to respond more quickly when the pilot wanted to pitch up or down. (The pros and cons of the design are reviewed in a 1984 paper by Fred E. C. Culick and Henry R. Jex.)

[Photo: the first flight at Kitty Hawk, December 17, 1903.] Orville at the controls, Wilbur running alongside. In this view we are seeing the airplane from the stern. The canards—dual adjustable horizontal surfaces at the front—seem to be calling for nose-up pitch. (Photo from Wikimedia.)

Another dramatically unstable aircraft was the Grumman X-29, a research platform designed in the 1980s. The X-29 had its wings on backwards; to make matters worse, the primary surfaces for pitch control were canards mounted in front of the wings, as in the Wright Flyer. [Photo: the X-29 at high angle of attack, with smoke generators.] The aim of this quirky project was to explore designs with exceptional agility, sacrificing static stability for tighter maneuvering. No unaided human pilot could have mastered such a twitchy vehicle. It required a digital fly-by-wire system that sampled the state of the airplane and adjusted the control surfaces up to 80 times per second. The controller was successful—perhaps too much so. It allowed the airplane to be flown safely, but in taming the instability it also left the plane with rather tame handling characteristics.

I have a glancing personal connection with the X-29 project. In the 1980s I briefly worked as an editor with members of the group at Honeywell who designed and built the X-29 control system. I helped prepare publications on the control laws and on their implementation in hardware and software. That experience taught me just enough to recognize something odd about MCAS: It is way too slow to be suppressing aerodynamic instability in a jet aircraft. Whereas the X-29 controller had a response time of 25 milliseconds, MCAS takes 10 seconds to move the 737 stabilizer through a 2.5-degree adjustment. At that pace, it cannot possibly keep up with forces that tend to flip the nose upward in a positive feedback loop.
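The arithmetic is stark (the figures are the ones quoted above; the comparison is mine):

    # MCAS moves the stabilizer 2.5 degrees in about 10 seconds.
    mcas_rate = 2.5 / 10.0   # 0.25 degrees per second
    # The X-29 controller responded in about 25 milliseconds, sampling
    # the airplane's state up to 80 times per second.
    x29_response = 0.025     # seconds

    print(mcas_rate)            # 0.25 deg/s
    print(10.0 / x29_response)  # a full MCAS command spans ~400
                                # X-29 response intervals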

There’s a simple explanation. MCAS is not meant to control an unstable aircraft. It is meant to restrain the aircraft from entering the regime where it becomes unstable. This is the same strategy used by other mechanisms of stall prevention—intervening before the angle of attack reaches the critical point. However, if Brady is correct about the instability of the 737 MAX, the task is more urgent for MCAS. Instability implies a steep and slippery slope. MCAS is a guard rail that bounces you back onto the road when you’re about to drive over the cliff.

Which brings up the question of Boeing’s announced plan to fix the MCAS problem. Reportedly, the revised system will not keep reactivating itself so persistently, and it will automatically disengage if it detects a large difference between the two AoA sensors. These changes should prevent a recurrence of the recent crashes. But do they provide adequate protection against the kind of mishap that MCAS was designed to prevent in the first place? With MCAS shut down, either manually or automatically, there’s nothing to stop an unwary or misguided pilot from wandering into the corner of the flight envelope where the MAX becomes unstable.

Without further information from Boeing, there’s no telling how severe the instability might be—if indeed it exists at all. The Brady article at the Boeing 737 Technical Site implies the problem is partly pilot-induced. Normally, to make the nose go higher and higher you have to pull harder and harder on the control yoke. In the unstable region, however, the resistance to pulling suddenly fades, and so the pilot may unwittingly pull the yoke to a more extreme position.

Is this human interaction a necessary part of the instability, or is it just an exacerbating factor? In other words, without the pilot in the loop, would there still be positive feedback causing runaway nose-up pitch? I have yet to find answers.

Another question: If the root of the problem is a deceptive change in the force resisting nose-up movement of the control yoke, why not address that issue directly? In the 737 (and most other large aircraft) the forces that the pilot “feels” through the control yoke are not simple reflections of the aerodynamic forces acting on the elevator and other control surfaces. The feedback forces are largely synthetic, generated by an “elevator feel computer” and an “elevator feel and centering unit,” devices that monitor the state of the airplane and generate appropriate hydraulic pressures pushing the yoke one way or another. [Figure: pitch controls diagram, borrowed from B737 NG Flight Controls, a presentation by theoryce. The presentation is for the 737 NG, not the MAX series; it’s possible the architecture has changed.] Those systems could have been given the additional task of maintaining or increasing back force on the yoke when the angle of attack approaches the instability. Artificially enhanced resistance is already part of the stall warning system. Why not extend it to MCAS? (There may be a good answer; I just don’t know it.)

Where’s the off switch?

Even after the spurious activation of MCAS on Lion Air 610, the crash and the casualties would have been avoided if the pilots had simply turned the damn thing off. Why didn’t they? Apparently because they had never heard of MCAS, and didn’t know it was installed on the airplane they were flying, and had not received any instruction on how to disable it. There’s no switch or knob in the cockpit labeled “MCAS ON/OFF.” The Flight Crew Operation Manual does not mention it (except in a list of abbreviations), and neither did the transitional training program the pilots had completed before switching from the 737 NG to the MAX. The training consisted of either one or two hours (reports differ) with an iPad app.

Boeing’s explanation of these omissions was captured in a Wall Street Journal story:

One high-ranking Boeing official said the company had decided against disclosing more details to cockpit crews due to concerns about inundating average pilots with too much information—and significantly more technical data—than they needed or could digest.

To call this statement disingenuous would be disingenuous. What it is is preposterous. In the first place, Boeing did not withhold “more details”; they failed to mention the very existence of MCAS. And the too-much-information argument is silly. I don’t have access to the Flight Crew Operation Manual for the MAX, but the NG edition runs to more than 1,300 pages, plus another 800 for the Quick Reference Handbook. A few paragraphs on MCAS would not have sunk any pilot who wasn’t already drowning in TMI. Moreover, the manual carefully documents the speed-trim and mach-trim features, which seem to fall in the same category as MCAS: They act autonomously, and offer the pilot no direct interface for monitoring or adjusting them.

In the aftermath of the Lion Air accident, Boeing stated that the procedure for disabling MCAS was spelled out in the manual, even though MCAS itself wasn’t mentioned. That procedure is given in a checklist for “runaway stabilizer trim.” It is not complicated: Hang onto the control yoke, switch off the autopilot and autothrottles if they’re on; then, if the problem persists, flip two switches labeled “STAB TRIM” to the “CUTOUT” position. Only the last step will actually matter in the case of an MCAS malfunction.

This checklist is considered a “memory item”; pilots must be able to execute the steps without looking it up in the handbook. The Lion Air crew should certainly have been familiar with it. But could they recognize that it was the right checklist to apply in an airplane whose behavior was unlike anything they had seen in their training or previous 737 flying experience? According to the handbook, the condition that triggers use of the runaway checklist is “Uncommanded stabilizer trim movement occurs continuously.” The MCAS commands were not continuous but repetitive, so some leap of inference would have been needed to make this diagnosis.

[Photo: the stabilizer trim wheels on the center console.]

By the time of the Ethiopian crash, 737 pilots everywhere knew all about MCAS and the procedure for disabling it. A preliminary report issued last week by Ethiopian authorities indicates that after a few minutes of wrestling with the control yoke, the pilots on Flight 302 did invoke the checklist procedure, and moved the STAB TRIM switches to CUTOUT. The stabilizer then stopped responding to MCAS nose-down commands, but the pilots were unable to regain control of the airplane.

It’s not entirely clear why they failed or what was going on in the cockpit in those last minutes. One factor may be that the cutout switch disables not only automatic pitch trim movements but also manual ones requested through the buttons on the control yoke. The switch cuts all power to the electric motor that moves the stabilizer. In this situation the only way to adjust the trim is to turn the hand crank wheels near the pilots’ knees. During the crisis on Flight 302 that mechanism may have been too slow to correct the trim in time, or the pilots may have been so fixated on pulling the control yoke back with maximum force that they did not try the manual wheels. It’s also possible that they flipped the switches back to the NORMAL setting, restoring power to the stabilizer motor. The report’s narrative doesn’t mention this possibility, but the graph from the flight data recorder suggests it (see below).

The single point of failure

There’s room for debate on whether the MCAS system is a good idea when it is operating correctly, but when it activates mistakenly and sends an airplane diving into the sea, no one would defend it. By all appearances, the rogue behavior in both the Lion Air and the Ethiopian accidents was triggered by a malfunction in a single sensor. That’s not supposed to happen in aviation. It’s unfathomable that any aircraft manufacturer would knowingly build a vehicle in which the failure of a single part would lead to a fatal accident.

Protection against single failures comes from redundancy, and the 737 is so committed to this principle that it almost amounts to two airplanes wrapped up in a single skin. (Aircraft that rely more heavily on automation generally have three of everything: sensors, computers, and actuators.) The cockpit has stations for two pilots, who look at separate sets of instruments and operate separate sets of controls. The left and right instrument panels receive signals from separate sets of sensors, and those signals are processed by separate computers. Each side of the cockpit has its own inertial guidance system, its own navigation computer, its own autopilot. There are two electric power supplies and two hydraulic systems—plus mechanical backups in case of a dual hydraulic failure. The two control yokes normally move in unison—they are linked under the floor—but if one yoke should get stuck, the connection can be broken, allowing the other pilot to continue flying the airplane.

There’s one asterisk in this roster of redundancy: A device called the flight control computer, or FCC, apparently gets special treatment. There are two FCCs, but according to the Boeing 737 Technical Site only one of them operates during any given flight. All the other duplicated components run in parallel, receiving independent inputs, doing independent computations, emitting independent control actions. But for each flight just one FCC does all the work, and the other is put on standby. The scheme for choosing the active computer seems strangely arbitrary. Each day when the airplane is powered up, the left side FCC gets control for the first flight, then the right side unit takes over for the second flight of the day, and the two sides alternate until the power is shut off. After a restart, the alternation begins again with the left FCC.
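As I read that description, the selection rule amounts to a few lines of code (a sketch of my understanding, not Boeing's implementation):

    # Sketch of the FCC alternation rule as described on the Boeing 737
    # Technical Site: left FCC for the first flight after power-up, then
    # alternation on each flight; a power cycle resets the sequence.
    class FccSelector:
        def __init__(self):
            self.flight_count = 0   # flights since power-up

        def power_cycle(self):
            self.flight_count = 0   # restart begins again with the left FCC

        def active_fcc(self):
            """Which FCC (and hence which AoA sensor) flies this flight."""
            side = "left" if self.flight_count % 2 == 0 else "right"
            self.flight_count += 1
            return side

    sel = FccSelector()
    print([sel.active_fcc() for _ in range(3)])  # ['left', 'right', 'left']
    sel.power_cycle()
    print(sel.active_fcc())                      # 'left' again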

Aspects of this scheme puzzle me. I don’t understand why redundant FCC units are treated differently from other components. If one FCC dies, does the other automatically take over? Can the pilots switch between them in flight? If so, would that be an effective way to combat MCAS misbehavior? I’ve tried to find answers in the manuals, but I don’t trust my interpretation of what I read.

I’ve also had a hard time learning anything about the FCC itself. I don’t know who makes it, or what it looks like, or how it is programmed. A website called Airframer lists many suppliers of parts and materials for the 737, but there’s no entry for a flight control computer. [Photo: a Honeywell 737 flight control computer. On a website called Closet Wonderfuls, an item identified as a 737 flight control computer is on offer for $43.82, with free shipping. It has a Honeywell label. I’m tempted, but I’m pretty sure this is not the unit installed in the latest MAX models.] I’ve learned that the FCC was once the FCE, for flight control electronics, suggesting it was an analog device, doing its integrations and differentiations with capacitors and resistors. By now I’m sure the FCC has caught up with the digital age, but it might still be special-purpose, custom-built hardware. Or it might be an off-the-shelf Intel CPU in a fancy box, maybe even running Linux or Windows. I just don’t know.

In the context of the MAX crashes, the flight control computer is important for two reasons. First, it’s where MCAS lives; this is the computer on which the MCAS software runs. Second, the curious procedure for choosing a different FCC on alternating flights also winds up choosing which AoA sensor is providing input to MCAS. The left and right sensors are connected to the corresponding FCCs.

If the two FCCs are used in alternation, that raises an interesting question about the history of the aircraft that crashed in Indonesia. The preliminary crash report describes trouble with various instruments and controls on five flights over four days (including the fatal flight). All of the problems were on the left side of the aircraft or involved a disagreement between the left and right sides.
The flight in the second row, marked with question marks, is not mentioned in the preliminary report, but the airplane had to get from Manado to Denpasar for the following day’s flight.

date | route | trouble reports | maintenance
Oct 26 | Tianjin → Manado | left side: no airspeed or altitude indications | test left Stall Management and Yaw Damper computer: passed
? | Manado → Denpasar | ? | ?
Oct 27 | Denpasar → Manado | left side: no airspeed or altitude indications; speed trim and mach trim warning lights | test left Stall Management and Yaw Damper computer: failed; reset left Air Data and Inertial Reference Unit; retest left Stall Management and Yaw Damper computer: passed; clean electrical connections
Oct 27 | Manado → Denpasar | left side: no airspeed or altitude indications; speed trim and mach trim warning lights; autothrottle disconnect | test left Stall Management and Yaw Damper computer: failed; reset left Air Data and Inertial Reference Unit; replace left AoA sensor
Oct 28 | Denpasar → Jakarta | left/right disagree warning on airspeed and altitude; stick shaker; [MCAS activation] | flush left pitot tube and static port; clean electrical connectors on elevator “feel” computer
Oct 29 | Jakarta → Pangkal Pinang | stick shaker; [MCAS activation] | (none; the accident flight)

Which of the five flights had the left-side FCC as active computer? The final two flights, where MCAS activated, were both first-of-the-day flights and so presumably under control of the left FCC. For the rest it’s hard to tell, especially since maintenance operations may have entailed full shutdowns of the aircraft, which would have reset the alternation sequence.

The revised MCAS software will reportedly consult signals from both AoA sensors. What will it do with the additional information? Only one clue has been published so far: If the readings differ by more than 5.5 degrees, MCAS will shut down. What if the readings differ by 4 or 5 degrees? Which sensor will MCAS choose to believe? (A recent paper by Daniel Ossmann of the German Aerospace Center discusses algorithmic detection of failures in AoA sensors.) Conservative (or pessimistic) engineering practice would seem to favor the higher reading, in order to provide better protection against instability and a stall. But that choice also raises the risk of dangerous “corrections” mandated by a faulty sensor.

The present MCAS system, with its alternating choice of left and right, has a 50 percent chance of disaster when a single random failure causes an AoA sensor to spew out falsely high data. With the same one-sided random failure, the updated MCAS will have a 100 percent chance of ignoring a pilot’s excursion into stall territory. Is that an improvement?
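Here is the published part of the revised logic in code form, with the unpublished part marked as a guess. The 5.5-degree threshold comes from the reports; the tie-breaking rule below (trust the higher reading) is exactly the conservative choice discussed above, not anything Boeing has announced:

    # Sketch of the reported dual-sensor gate for the revised MCAS.
    DISAGREE_LIMIT = 5.5  # degrees, the published shutdown threshold

    def mcas_input(left_aoa, right_aoa):
        if abs(left_aoa - right_aoa) > DISAGREE_LIMIT:
            return None  # MCAS inhibited; flight-deck indicator alerts pilots
        return max(left_aoa, right_aoa)  # guessed rule: favor stall protection

    print(mcas_input(24.0, 4.0))  # stuck-high sensor: None, MCAS stands down
    print(mcas_input(6.0, 4.0))   # small disagreement: 6.0 (guessed rule)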

The broken sensor

Although a faulty sensor should not bring down an airplane, I would still like to know what went wrong with the AoA vane.

It’s no surprise that AoA sensors can fail. They are mechanical devices operating in a harsh environment: winds exceeding 500 miles per hour and temperatures below –40. [Figure: Lion Air 610 flight data, AoA detail.] A common failure mode is a stuck vane, often caused by ice (despite a built-in de-icing heater). But a seized vane would produce a constant output, regardless of the real angle of attack, which is not the symptom seen in Flight 610. The flight data recorder shows small fluctuations in the signals from both the left and the right instruments. Furthermore, the jiggles in the two curves are closely aligned, suggesting they are both tracking the same movements of the aircraft. In other words, the left-hand sensor appears to be functioning; it’s just giving measurements offset by a constant deviation of roughly 20 degrees.
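That diagnosis, a working vane with a constant bias, is the kind of thing software could test for directly. A sketch, with invented numbers standing in for the flight-recorder traces:

    # If a vane is merely offset, the left-right difference should be nearly
    # constant while the fluctuations stay correlated. Data here is made up
    # to mimic the Lion Air traces (roughly 20 degrees apart).
    left  = [22.1, 22.9, 21.8, 22.4, 23.0, 22.2]   # degrees
    right = [ 2.0,  2.7,  1.9,  2.3,  3.0,  2.1]   # degrees

    diffs = [l - r for l, r in zip(left, right)]
    mean_offset = sum(diffs) / len(diffs)
    spread = max(diffs) - min(diffs)
    print(f"offset {mean_offset:.1f} deg, spread {spread:.2f} deg")
    # A large steady offset with a tiny spread points to a biased but
    # still-functioning sensor rather than a stuck vane.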

Is there some other failure mode that might produce the observed offset? Sure: Just bend the vane by 20 degrees. Maybe a catering truck or an airport jetway blundered into it. Another creative thought is that the sensor might have been installed wrong, with the entire unit rotated by 20 degrees. Several writers on a website called the Professional Pilots Rumour Network explored this possibility, but they ultimately concluded it was impossible. The manufacturer, doubtless aware of the risk, placed the mounting screws and locator pins asymmetrically, so the unit will only go into the hull opening one way.

You might get the same effect through an assembly error during the manufacture of the sensor. The vane could be incorrectly attached to the shaft, or else the internal transducer that converts angular position into an electrical signal might be mounted wrong. Did the designers also ensure that such mistakes are impossible? I don’t know; I haven’t been able to find any drawings or photographs of the sensor’s innards.

Looking for other ideas about what might have gone wrong, I made a quick, scattershot survey of FAA airworthiness directives that call for servicing or replacing AoA sensors. I found dozens of them, including several that discuss the same sensor installed on the 737 MAX (the Rosemount 0861). But none of the reports I read describes a malfunction that could cause a consistent 20-degree error.

For a while I thought that the fault might lie not in the sensor itself but farther along the data path. It could be something as simple as a bad cable or connector. Signals from the AoA sensor go to the Air Data and Inertial Reference Unit (ADIRU), where the sine and cosine components are combined and digitized to yield a number representing the measured angle of attack. The ADIRU also receives inputs from other sensors, including the pitot tubes for measuring airspeed and the static ports for air pressure. And it houses the gyroscopes and accelerometers of an inertial guidance system, which can keep track of aircraft motion without reference to external cues. (There’s a separate ADIRU for each side of the airplane.) Maybe there was a problem with the digitizer—a stuck bit rather than a stuck vane.
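The conversion itself is presumably something like the generic resolver-to-angle computation below (my sketch, not the ADIRU's actual code); it also shows how a single stuck bit in a hypothetical fixed-point encoding would produce exactly the kind of constant offset seen in the data:

    import math

    # A resolver-style sensor reports its shaft angle as a sine/cosine
    # voltage pair; the digitizer recovers the angle.
    def angle_from_resolver(sin_v, cos_v):
        return math.degrees(math.atan2(sin_v, cos_v))

    true_angle = 2.0  # degrees
    s = math.sin(math.radians(true_angle))
    c = math.cos(math.radians(true_angle))
    print(angle_from_resolver(s, c))  # recovers ~2.0

    # Hypothetical fixed-point encoding at 0.1 degree per count:
    # bit 7 stuck high adds 128 counts, i.e. a constant 12.8 degrees.
    counts = round(true_angle / 0.1) | (1 << 7)
    print(counts * 0.1)  # 14.8 -- a fixed offset, not noise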

Further information has undermined this idea. For one thing, the AoA sensor removed by the Lion Air maintenance crew on October 27 is now in the hands of investigators. According to news reports, it was “deemed to be defective,” though I’ve heard no hint of what the defect might be. Also, it turns out that one element of the control system, the Stall Management and Yaw Damper (SMYD) computer, receives the raw sine and cosine voltages directly from the sensor, not a digitized angle calculated by the ADIRU. It is the SMYD that controls the stick-shaker function. On both the Lion Air and the Ethiopian flights the stick shaker was active almost continuously, so those undigitized sine and cosine voltages must have been indicating a high angle of attack. In other words the error already existed before the signals reached the ADIRU.

I’m still stumped by the fixed angular offset in the Lion Air data, but the question now seems a little less important. The release of the preliminary report on Ethiopian Flight 302 shows that the left-side AoA sensor on that aircraft also failed badly, but in a way that looks totally different. Here are the relevant traces from the flight data recorder:

[Figure: Ethiopian 302 flight data recorder, AoA traces.]

The readings from the AoA sensors are the uppermost lines, red for the left sensor and blue for the right. At the left edge of the graph they differ somewhat when the airplane has just begun to move, but they fall into close coincidence once the roll down the runway has built up some speed. At takeoff, however, they suddenly diverge dramatically, as the left vane begins reading an utterly implausible 75 degrees nose up. Later it comes down a few degrees but otherwise shows no sign of the ripples that would suggest a response to airflow. At the very end of the flight there are some more unexplained excursions.

By the way, in this graph the light blue trace of automatic trim commands offers another clue to what might have happened in the last moments of Flight 302. Around the middle of the graph, the STAB TRIM switches were pulled, with the result that an automatic nose-down command had no effect on the stabilizer position. But at the far right, another automatic nose-down command does register in the trim-position trace, suggesting that the cutout switches may have been turned on again.

Still more stumpers

There’s so much I still don’t understand.

Puzzle 1. If the Lion Air and Ethiopian accidents were both caused by faulty AoA sensors, then there were three parts with similar defects in brand new aircraft (including the replacement sensor installed by Lion Air on October 27). A recent news item says the replacement was not a new part but one that had been refurbished by a Florida shop called XTRA Aerospace. This fact offers us somewhere else to point the accusatory finger, but presumably the two sensors installed by Boeing were not retreads, so XTRA can’t be blamed for all of them.

There are roughly 400 MAX aircraft in service, with 800 AoA sensors. Is a failure rate of 3 out of 800 unusual or unacceptable? Does that judgment depend on whether or not it’s the same defect in all three cases?

Puzzle 2. Let’s look again at the traces for pitch trim and angle of attack in the Lion Air 610 data. The conflicting manual and automatic commands in the second half of the flight have gotten lots of attention, but I’m also baffled by what was going on in the first few minutes.

[Figure: Lion Air 610 flight data, trim and AoA detail.]

During the roll down the runway, the pitch trim system was set near its maximum pitch-up position (dark blue line). Immediately after takeoff, the automatic trim system began calling for further pitch-up movement, and the stabilizer probably reached its mechanical limit. At that point the pilots manually trimmed it in the pitch-down direction, and the automatic system replied with a rapid sequence of up adjustments. In other words, there was already a tug-of-war underway, but the pilots and the automated controls were pulling in directions opposite to those they would choose later on. All this happened while the flaps were still deployed, which means that MCAS could not have been active. Some other element of the control system must have been issuing those automatic pitch-up orders. Deepening the mystery, the left side AoA sensor was already feeding its spurious high readings to the left-side flight control computer. If the FCC was acting on that data, it should not have been commanding nose-up trim.

Puzzle 3. The AoA readings are not the only peculiar data in the chart from the Lion Air preliminary report. Here are the altitude and speed traces:

[Figure: Lion Air 610 flight data, altitude and indicated airspeed detail.]

The left-side altitude readings (red) are low by at least a few hundred feet. The error looks like it might be multiplicative rather than additive, perhaps 10 percent. The left and right computed airspeeds also disagree, although the chart is too squished to allow a quantitative comparison. It was these discrepancies that initially upset the pilots of Flight 610; they could see them on their instruments. (They had no angle of attack indicators in the cockpit, so that conflict was invisible to them.)

Altitude, airspeed, and angle of attack are all measured by different sensors. Could they all have gone haywire at the same time? Or is there some common point of failure that might explain all the weird behavior? In particular, is it possible a single wonky AoA sensor caused all of this havoc? My guess is yes. The sensors for altitude and airspeed and even temperature are influenced by angle of attack. The measured speed and pressure are therefore adjusted to compensate for this confounding variable, using the output of the AoA sensor. That output was wrong, and so the adjustments allowed one bad data stream to infect all of the air data measurements.
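The cascade is easy to show schematically. In the sketch below, the correction function and its coefficient are invented (real air-data corrections are aircraft-specific calibration curves); the point is only that one corrupted input taints every "corrected" output:

    # Schematic only: a made-up linear AoA compensation of the
    # static-pressure altitude.
    K = 0.01  # invented coefficient: fractional correction per degree of AoA

    def corrected_altitude(raw_altitude_ft, measured_aoa_deg):
        return raw_altitude_ft * (1.0 - K * measured_aoa_deg)

    raw = 5000.0  # feet, from the static port
    print(corrected_altitude(raw, 2.0))   # sensible AoA: 4900 ft
    print(corrected_altitude(raw, 22.0))  # vane offset by 20 deg: 3900 ft
    # One biased AoA value has shifted the displayed altitude by 1000 feet.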

Man or machine

Six months ago, I was writing about another disaster caused by an out-of-control control system. In that case the trouble spot was a natural gas distribution network in Massachusetts, where a misconfigured pressure-regulating station caused fires and explosions in more than 100 buildings, with one fatality and 20 serious injuries. I lamented: “The special pathos of technological tragedies is that the engines of our destruction are machines that we ourselves design and build.”

In a world where defective automatic controls are blowing up houses and dropping aircraft out of the sky, it’s hard to argue for more automation, for adding further layers of complexity to control systems, for endowing machines with greater autonomy. Public sentiment leans the other way. Like President Trump, most of us trust pilots more than we trust computer scientists. We don’t want MCAS on the flight deck. We want Chesley Sullenberger III, the hero of US Airways Flight 1549, who guided his crippled A320 to a dead-stick landing in the Hudson River and saved all 155 souls on board. No amount of cockpit automation could have pulled off that feat.

Nevertheless, a cold, analytical view of the statistics suggests a different reaction. The human touch doesn’t always save the day. On the contrary, pilot error is responsible for more fatal crashes than any other cause. One survey lists pilot error as the initiating event in 40 percent of fatal accidents, with equipment failure accounting for 23 percent. No one is (yet) advocating a pilotless cockpit, but at this point in the history of aviation technology that’s a nearer prospect than a computer-free cockpit.

The MCAS system of the 737 MAX represents a particularly awkward compromise between fully manual and fully automatic control. The software is given a large measure of responsibility for flight safety and is even allowed to override the decisions of the pilot. And yet when the system malfunctions, it’s entirely up to the pilot to figure out what went wrong and how to fix it—and the fix had better be quick, before MCAS can drive the plane into the ground.

Two lost aircraft and 346 deaths are strong evidence that this design was not a good idea. But what to do about it? Boeing’s plan is a retreat from automatic control, returning more responsibility and authority to the pilots:

  • Flight control system will now compare inputs from both AOA sensors. If the sensors disagree by 5.5 degrees or more with the flaps retracted, MCAS will not activate. An indicator on the flight deck display will alert the pilots.
  • If MCAS is activated in non-normal conditions, it will only provide one input for each elevated AOA event. There are no known or envisioned failure conditions where MCAS will provide multiple inputs.
  • MCAS can never command more stabilizer input than can be counteracted by the flight crew pulling back on the column. The pilots will continue to always have the ability to override MCAS and manually control the airplane.

A statement from Dennis Muilenburg, Boeing’s CEO, says the software update “will ensure accidents like that of Lion Air Flight 610 and Ethiopian Airlines Flight 302 never happen again.” I hope that’s true, but what about the accidents that MCAS was designed to prevent? I also hope we will not be reading about a 737 MAX that stalled and crashed because the pilots, believing MCAS was misbehaving, kept hauling back on the control yokes.

If Boeing were to take the opposite approach—not curtailing MCAS but enhancing it with still more algorithms that fiddle with the flight controls—the plan would be greeted with hoots of outrage and derision. Indeed, it seems like a terrible idea. MCAS was installed to prevent pilots from wandering into hazardous territory. A new supervisory system would keep an eye on MCAS, stepping in if it began acting suspiciously. Wouldn’t we then need another custodian to guard the custodians, ad infinitum? Moreover, with each extra layer of complexity we get new side effects and unintended consequences and opportunities for something to break. The system becomes harder to test, and impossible to prove correct.

Those are serious objections, but the problem being addressed is also serious.

Suppose the 737 MAX didn’t have MCAS but did have a cockpit indicator of angle of attack. On the Lion Air flight, the captain would have felt the stick-shaker warning him of an incipient stall and would have seen an alarmingly high angle of attack on his instrument panel. His training would have impelled him to do the same thing MCAS did: Push the nose down to get the wings working again. Would he have continued pushing it down until the plane crashed? Surely not. He would have looked out the window, he would have cross-checked the instruments on the other side of the cockpit, and after some scary moments he would have realized it was a false alarm. (In darkness or low visibility, where the pilot can lose track of the horizon, the outcome might be worse.)

I see two lessons in this hypothetical exercise. First, erroneous sensor data is dangerous, whether the airplane is being flown by a computer or by Chesley Sullenberger. A prudently designed instrument and control system would take steps to detect (and ideally correct) such errors. At the moment, redundancy is the only defense against these failures—and in the unpatched version of MCAS even that protection is compromised. It’s not enough. One key to the superiority of human pilots is that they exercise judgment and sometimes skepticism about what the instruments tell them. That kind of reasoning is not beyond the reach of automated systems. There’s plenty of information to be exploited. For example, inconsistencies between AoA sensors, pitot tubes, static pressure ports, and air temperature probes not only signal that something’s wrong but can offer clues about which sensor has failed. The inertial reference unit provides an independent check on aircraft attitude; even GPS signals might be brought to bear. Admittedly, making sense of all this data and drawing a valid conclusion from it—a problem known as sensor fusion—is a major challenge.
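Even a crude version of such cross-checking is easy to state in code. The sketch below vets an AoA vane against an attitude-derived estimate; the kinematic relation (AoA roughly equals pitch minus flight-path angle) is standard, but the tolerance and fallback rule are mine:

    # Toy consistency check using quantities available from the inertial
    # reference unit. Tolerance is an invented figure.
    TOLERANCE = 5.0  # degrees

    def vet_aoa(vane_aoa, pitch_deg, flight_path_deg):
        estimate = pitch_deg - flight_path_deg
        if abs(vane_aoa - estimate) > TOLERANCE:
            return None, estimate   # distrust the vane; fall back on estimate
        return vane_aoa, estimate

    print(vet_aoa(4.0, 6.0, 2.0))   # (4.0, 4.0): vane agrees, use it
    print(vet_aoa(24.0, 6.0, 2.0))  # (None, 4.0): vane flagged as suspect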

Second, a closed-loop controller has yet another source of information: an implicit model of the system being controlled. If you change the angle of the horizontal stabilizer, the state of the airplane is expected to change in known ways—in angle of attack, pitch angle, airspeed, altitude, and in the rate of change in all these parameters. If the result of the control action is not consistent with the model, something’s not right. To persist in issuing the same commands when they don’t produce the expected results is not reasonable behavior. Autopilots include rules to deal with such situations; the lower-level control laws that run in manual-mode flight could incorporate such sanity checks as well.
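A sanity check of that kind might look like the following sketch (the expected response and the retry threshold are placeholders, not values from any flight control law):

    # After commanding nose-down trim, the controller expects measured AoA
    # to fall. If repeated commands have no effect, stop and raise a fault.
    EXPECTED_DROP = 0.5        # invented: degrees of AoA per trim command
    MAX_FUTILE_COMMANDS = 2    # invented threshold

    def supervise(aoa_before, aoa_after, futile_count):
        if aoa_before - aoa_after < EXPECTED_DROP:
            futile_count += 1
        else:
            futile_count = 0
        if futile_count >= MAX_FUTILE_COMMANDS:
            return "FAULT: trim commands not having expected effect", futile_count
        return "ok", futile_count

    state, n = supervise(24.0, 24.1, 0); print(state)  # ok (first miss)
    state, n = supervise(24.1, 24.0, n); print(state)  # FAULT on second miss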

I don’t claim to have the answer to the MCAS problem. And I don’t want to fly in an airplane I designed. (Neither do you.) But there’s a general principle here that I believe should be taken to heart: If an autonomous system makes life-or-death decisions based on sensor data, it ought to verify the validity of the data.

Update 2019-04-11

Boeing continues to insist that MCAS is “not a stall-protection function and not a stall-prevention function. It is a handling-qualities function. There’s a misconception it is something other than that.” This statement comes from Mike Sinnett, who is vice president of product development and future airplane development at Boeing; it appears in an Aviation Week article by Guy Norris published online April 9.

I don’t know exactly what “handling qualities” means in this context. To me the phrase connotes something that might affect comfort or aesthetics or pleasure more than safety. An airplane with different handling qualities would feel different to the pilot but could still be flown without risk of serious mishap. Is Sinnett implying something along those lines? If so—if MCAS is not critical to the safety of flight—I’m surprised that Boeing wouldn’t simply disable it temporarily, as a way of getting the fleet back in the air while they work out a permanent solution.

The Norris article also quotes Sinnett as saying: “The thing you are trying to avoid is a situation where you are pulling back and all of a sudden it gets easier, and you wind up overshooting and making the nose higher than you want it to be.” That situation, with the nose higher than you want it to be, sounds to me like an airplane that might be approaching a stall.

A story by Jack Nicas, David Gelles, and James Glanz in today’s New York Times offers a quite different account, suggesting that “handling qualities” may have motivated the first version of MCAS, but stall risks were part of the rationale for later beefing it up.

The system was initially designed to engage only in rare circumstances, namely high-speed maneuvers, in order to make the plane handle more smoothly and predictably for pilots used to flying older 737s, according to two former Boeing employees who spoke on the condition of anonymity because of the open investigations.

For those situations, MCAS was limited to moving the stabilizer—the part of the plane that changes the vertical direction of the jet—about 0.6 degrees in about 10 seconds.

It was around that design stage that the F.A.A. reviewed the initial MCAS design. The planes hadn’t yet gone through their first test flights.

After the test flights began in early 2016, Boeing pilots found that just before a stall at various speeds, the Max handled less predictably than they wanted. So they suggested using MCAS for those scenarios, too, according to one former employee with direct knowledge of the conversations.

Finally, another Aviation Week story by Guy Norris, published yesterday, gives a convincing account of what happened to the angle of attack sensor on Ethiopian Airlines Flight 302. According to Norris’s sources, the AoA vane was sheared off moments after takeoff, probably by a bird strike. This hypothesis is consistent with the traces extracted from the flight data recorder, including the strange-looking wiggles at the very end of the flight. I wonder if there’s hope of finding the lost vane, which shouldn’t be far from the end of the runway.

Quiet Earth: Dead By Dawn Fest Announces Line-up [Festival]

With a stellar line-up of the best in new independent and international horror features and shorts, Scotland's premier horror film festival - Dead by Dawn - returns to Edinburgh's Filmhouse this April 18th-21st. Our own Simon Read will be in attendance, providing coverage and reviews.


The line-up this year includes the UK premieres of G Patrick Condon's meta-horror Incredible Violence, Rasmus Kloster Bro's claustrophobic Cutterhead and Juuso Laatio & Jukka Vidgren's deadpan metal-head comedy Heavy Trip.


From Germany, Tilman Singer's bizarro demonic chiller LUZ will be screening, and from France director Quarxx's ferociously dark Tous Les Dieux du Ciel. Brett Simmons' Summer camp nightmare flick You Might be the Killer - starring Cabin in [Continued ...]

Planet Lisp: Didier Verna: Quickref 2.0 "Be Quick or Be Dead" is released

Surfing on the energizing wave of ELS 2019, the 12th European Lisp Symposium, I'm happy to announce the release of Quickref 2.0, codename "Be Quick or Be Dead".

The major improvement in this release, justifying an increment of the major version number (and the very appropriate codename), is the introduction of parallel algorithms for building the documentation. I presented this work last week in Genova so I won't go into the gory details here, but for the brave and impatient, let me just say that using the parallel implementation is just a matter of calling the BUILD function with :parallel t :declt-threads x :makeinfo-threads y (adjust x and y as you see fit, depending on your architecture).

The second featured improvement is the introduction of an author index, in addition to the original one. The author index is still a bit shaky, mostly due to technical problems (calling asdf:find-system almost two thousand times simply doesn't work) and also to the very creative use that some library authors make of the ASDF author and maintainer slots in the system descriptions. It does, however, do a quite decent job for the majority of the authors and their libraries' reference manuals.

Finally, the repository now has a fully functional continuous integration infrastructure, which means that there shouldn't be any more lag between new Quicklisp (or Quickref) releases and new versions of the documentation website.

Thanks to Antoine Hacquard, Antoine Martin, and Erik Huelsmann for their contribution to this release! A lot of new features are already in the pipe. Currently documenting 1720 libraries, and counting...

Jesse Moynihan: Tarot Booklet Page 7

Fitting all this info on a small page is tough. VI The Lover. Three figures below: a maiden to the right, a masculine-ish person, and an older feminine-ish person to the left. Their hands are a tangled maze. Between the man and the maiden, whose hand is whose? Why is one hand turned the wrong […]

Charles Petzold: Reading “Sounds Like Titanic”

Jessica Chiccehitto Hindman is four years old when she first hears the music that makes her want to play the violin. It comes from an animated film called Sarah and the Squirrel, about a young girl escaping the Holocaust, accompanied by music the likes of which she has never heard. “Violin music,” her father tells her, and from that moment on, she wants to play that music. Much later she will learn that the music in the movie is the opening of the Winter concerto from Vivaldi’s Four Seasons, and chills go up my spine just imagining a four-year-old hearing that music for the very first time.

... more ...

Daniel Lemire's blog: Science and Technology links (April 6th 2019)

  1. In a randomized trial where people reduced their caloric intake by 15% for two years, it was found that reducing calories slowed aging. This is well documented in animals, going all the way to worms and insects, but we now have some evidence that it applies to human beings as well. Personally I do not engage in either caloric restriction or fasting, but I am convinced it would be good for me to do so.
  2. What is the likely economic impact of climate change over the coming century? We do not know for sure. However, all estimates point to a modest impact, always significantly less than 10% of the size of the economy over a century while the world’s economy grows at about 3% a year.

    Clearly, 27 estimates are a thin basis for drawing definitive conclusions about the total welfare impacts of climate change. (…) it is unclear whether climate change will lead to a net welfare gain or loss. At the same time, however, despite the variety of methods used to estimate welfare impacts, researchers agree on the order of magnitude, with the welfare change caused by climate change being equivalent to the welfare change caused by an income change of a few percent. That is, these estimates suggest that a century of climate change is about as good/bad for welfare as a year of economic growth.

  3. Is the scientific establishment biased against women or not? Miller reports on new research showing that men tend to reject evidence of bias whereas women tend to reject contrary evidence.
  4. Technology greatly improved the productivity of farming. We are often told that the reason we did not see famines on a massive scale despite earlier predictions to that effect (e.g., by the Club of Rome) is the so-called Green Revolution. It seems that this is not well founded on facts:

    We argue a political myth of the Green Revolution focused on averted famine is not well grounded in evidence and thus has potential to mislead to the extent it guides thinking and action related to technological innovation. We recommend an alternative narrative: The Green Evolution, in which sustainable improvements in agricultural productivity did not necessarily avert a global famine, but nonetheless profoundly shaped the modern world.

  5. Sugar does not give your mood a boost. We do not feel more energetic after eating sugar.
  6. Though e-cigarettes are probably incomparably safer than actual cigarettes, people have been banning them on the ground that e-cigarettes might be a gateway toward cigarettes. They are likely wrong. If anything, e-cigarettes are probably a solution for people who have not managed to stop smoking by other means. They have been found to be a highly effective way to stop smoking. Thus e-cigarettes are likely saving lives; people who ban e-cigarettes despite the evidence should have to answer for the consequences of their choices.
  7. People who think that little boys are more physically aggressive than little girls because of how they are raised are likely wrong.
  8. I am impressed with the courage of these researchers: Oral sex is associated with reduced incidence of recurrent miscarriage (Journal of Reproductive Immunology, 2019).

Quiet Earth: Hollywood Dreams Go Sour in Comedy BERSERK [Review]

Strange things happen everywhere but Hollywood is one of those places more prone to odd occurrences than most. Maybe it's something in the air or the water but most likely, it's due to the fact that Hollywood is a place that attracts people in larger numbers than most.


Written and directed by Rhys Wakefield, who is best known for his turns in "True Detective" and as the creepy leader of the roving mob in The Purge, Berserk also stars Wakefield as Evan, an aspiring actor/screenwriter who loses his current acting gig and his manager both in the opening scene of the movie. He pleads, unsuccessfully, for a second chance but his former manager leaves the door open: bring her a finished script with Raffy (Nick Cannon), a bonafide Hollywood star who Evan just happens to be goo [Continued ...]

OUR VALUED CUSTOMERS: To his friend...


Disquiet: Disquiet Junto Project 0379: Open Studios

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Deadline: This project’s deadline is Monday, April 8, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the morning, California time, on Thursday, April 4, 2019.

These are the instructions that went out to the group’s email list (at tinyletter.com/disquiet-junto):

Disquiet Junto Project 0379: Open Studios
The Assignment: Share a track, get feedback, and give feedback.

Step 1: The purpose of this week’s project is to provide participants opportunities to get feedback on works-in-progress. Consider work you’re doing that you’d appreciate responses to from fellow Junto participants.

Step 2: Either upload an existing recording (sketches and mid-process takes may prove optimal), or record something new and post it online for feedback. If there are some things in particular you’d like feedback on, mention what they are.

Step 3: After uploading, be sure to listen to the work of other participants, and to post responses.

Seven More Important Steps When Your Track Is Done:

Step 1: Include “disquiet0379” (no spaces or quotation marks) in the name of your track.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0379” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential to the subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your track. It is helpful but not essential that you use SoundCloud to host your track.

Step 4: Post your track in the following discussion thread at llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0379-open-studios/

Step 5: Annotate your track with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #disquietjunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Additional Details:

Deadline: This project’s deadline is Monday, April 8, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the morning, California time, on Thursday, April 4, 2019.

Length: The length is up to you.

Title/Tag: When posting your track, please include “disquiet0379” in the title of the track, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, post one finished track with the project tag, and be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: Consider setting your track as downloadable and allowing for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include the following information:

More on this 379th weekly Disquiet Junto project — Open Studios / The Assignment: Share a track, get feedback, and give feedback — at:

https://disquiet.com/0379/

More on the Disquiet Junto at:

https://disquiet.com/junto/

Subscribe to project announcements here:

http://tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0379-open-studios/

There’s also a Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.

Image associated with this project adapted (cropped, colors changed, text added, cut’n’paste) thanks to a Creative Commons license from a photo credited to Matthew Ebel:

https://flic.kr/p/SJYUSf

https://creativecommons.org/licenses/by-nc-sa/2.0/

Tea Masters: The very green Qingxin Oolong

This tea is confusing, because it's a Qingxin Oolong green tea! Wait, what? An Oolong green tea? Do I mean it's one of those very lightly oxidized Oolongs sometimes called 'nuclear green'?
No, those are still Oolongs. The explanation of this riddle is that this is a green tea (zero oxidation), but it's made from a tea cultivar named 'Qingxin Oolong' (aka Ruanzhi Oolong, or soft-stem Oolong). That's because you can process any tea leaf the way you wish. You could also make white tea or red tea with Qingxin Oolong leaves!
This spring 2017 green Qingxin Oolong was harvested on April 25th, 2017. I have stored it vacuum-sealed, and the freshness is still very present in the dry aromas. To celebrate spring, this tea is now my gift for orders in excess of 60 USD (excluding shipping) and below 200 USD.
I'm brewing this tea in a (preheated) thin white porcelain bowl (by David Louveau). I make the leaves turn thanks to the pour of water from the silver kettle and, later, by using a porcelain soup spoon to lightly make the leaves dance. This helps them to unfold and release their aromas.
 This method works well with green tea. Its purpose isn't a very strong cup, but a light one. Once you smell or see that it's ready, you can pour the tea in the cups with the soup spoon. And if it's becoming too strong, you simply add more hot water in the bowl.
This green tea is interesting, because it shows the character of the Qingxin Oolong cultivar (famous for Hung Shui and High Mountain Oolong) as a green tea. Its freshly cut grass notes, typical of green tea, are very high and refined. And in terms of taste, there's a good mellow feeling when brewed lightly, but it turns bitter if left to brew too long. It's full of 'green' energy!
The biggest difference from Oolong is that the leaves are mostly buds or very small. This leads to this kind of beautiful picture. Imagine using the spoon to let these 2 buds dance!...

Disquiet: Davachi in Pale Bloom

When the held chord, all wavering sine waves, gives way to something else, when that foregrounded drone — somehow both a mainstay of experimental electronic music, and also the easiest of easy listenings — becomes background, and when that something else that comes to the fore is a piano, then something is most certainly up.

The initial chord is, soon, layered with another, higher chord, and the combination yields slow moving gusts of moiré patterns. The pair sandwiches the sequence of gentle piano phrases. The eternal hold of those chords balances against, contrasts with, the natural quieting of each struck piano figure, which are spaced out to draw focused attention. This is “Perfumes III,” the initial track release from the forthcoming album Pale Bloom, due out at the end of May from Sarah Davachi.

Davachi has made a name for herself in recent years as a thoughtful and dedicated synthesizer musician, and Pale Bloom apparently is a reunion of sorts, connecting back to an instrument of her youth. On this track, Hammond organ is a source of the droning backdrop to her piano. Another track on the full release, the 20-minute “If It Pleased Me to Appear to You Wrapped in This Drapery,” will include the synthesizer her growing audience has come to expect.

Track originally posted at sarahdavachi.bandcamp.com. More from Davachi at sarahdavachi.com.

OUR VALUED CUSTOMERS: While discussing comic book movies...


Tea Masters: Brew the flower inside

Taiwan Oolong teas often have scents of flowers. Sometimes, it's because they are artificially scented or scented with actual flowers. But most of the time, these aromas develop naturally during the partial-oxidation production process. Here resides one of the great mysteries and beauties of tea: a green leaf that can be turned into a flower! And a special kind of flower, one that keeps its fragrance for the moment YOU choose!
 And contrary to tea bags, whole leaf Oolongs can be brewed again and again and again... The power of the aromas may diminish little by little, but it's still fun to play the game of seeing how far the leaves will take you, how many good brews you're getting! This is especially true with traditionally roasted Oriental Beauty.
Close your eyes while you're drinking your Oolong tea. What flower can you smell and see?

i like this art: Naoya Hatakeyama

Naoya Hatakeyama

Work from BLAST.

“Do not be afraid, yet do not make light of nature. Always keep the gods in mind with prayer…

Imagine a huge piece of rock, like a mountain, that you wanted to take a part of home. How would you do it? If you had a hammer, like a geologist, you could swing it down and crack off a piece to put in your pocket. If you noticed a fissure in the rock, you could put a wedge or chisel in to obtain a larger piece. They say that when Hannibal of Carthage crossed the Alps with his elephants, he made fires around huge rocks and poured water over the heated surfaces so that they split, and by repeating this was able to create a road for the troops to advance along. With hammer, chisel, wedge, fire and water, rock can be turned into small, transportable pieces. But what if we need a huge amount of these broken rocks? What if we need enough to make a city out of them? Hammers and chisels are out of the question. Fire and water are not sufficient. We need greater force. That is demanded by our modern age: a force as big as our modern desires.”
-Naoya Hatakeyama, excerpt from afterword, BLAST

OCaml Weekly News: OCaml Weekly News, 02 Apr 2019

  1. Cstruct.4.0.0: sexplib goes optional
  2. New library - uritemplate 0.1.0
  3. Check opam's health for the upcoming OCaml release (4.08)
  4. Turn echoing off on standard input to read e.g. passwords
  5. http2/af: An HTTP/2 implementation for OCaml
  6. Release of OCamlFormat 0.9
  7. Other OCaml News

i like this art: Bessma Khalaf

Bessma Khalaf

Works from Torch Song.

“Khalaf’s sublime, black and white, natural world landscapes offer up beauty and uncertainty. Using various processes of degradation (burning, smashing, consuming) the artist re-imagines the natural world, taking the viewer beyond the nihilism of destruction, into the generative possibilities that are offered by voids and absences. Troubling, and all too relevant, the photographs include the destruction of nature in their process; in many of the works, the source of that violence is fire. Each photograph contains burned areas where Khalaf sets afire a section of the composition. Leaving part of the photo to burn away, she then extinguishes it and photographs whatever is left over, creating new images. While suggesting destruction, Khalaf’s unmaking doesn’t spiral entirely into nothingness, but leaves an absence as a relic of her action. Ultimately she demonstrates that it is still possible to discover unexpected beauty in destruction and, in so doing, opens up the viewer’s sense of the sublime.

When pitting herself against the overwhelming vastness of her surroundings, or the largess of romantic landscape, Khalaf mixes mysticism, futility, and endurance. Rather than attacking images, Khalaf intervenes on the landscape itself; the results suggest the difference between images and the world.” – Romer Young Gallery

MattCha's Blog: Famous Puerh In the West: 2003 HK Henry “Conscientious Prescription”


Reason For Fame: This puerh was the first well-documented instance in English of a young puerh that was initially very unpalatable and undrinkable aging into something very enjoyable.  The details can be found in blog posts by Hobbes on the famous Half-Dipper here and here.

There are many different transliterations of the name of this cake out there but they are all referring to the same cake.  Here are some alternative names with the links to articles which used them:

2003 HK Henry “Conscientious Prescription”


2003 Hong Kong Henry “Conscientious Prescription” 7542


2003 Menghai Hong Kong Henry 7542 (“Scholarly Tea”)


2003 Henry Trading Co. HK Ltd. "Seriously Formula" Ching Beeng 7542


2003 HK Henry Specially Ordered 7542 Menghai


As mentioned in the Half-Dipper post, this tea was for sale at Hou De Asian Arts in 2007 at $78.00 for a 357g cake, or $0.22/g.  Back then this tea was really expensive, and probably a bit overpriced for its age at that time.  Remarkably, this cake now sells for around $130.00 and can be found with varying storage options and from various vendors.

Teas We Like features a dry Taiwanese-stored version of this cake for $130.  The Essence of Tea offers a more humid, Hong Kong then Malaysian stored version, also now priced at $130, which is currently sold out but may possibly be re-stocked.  I have also heard that it is sometimes available from the Taiwanese Facebook puerh auctions.  The options on this cake are many, mainly because, for the above reason for fame, I think this puerh is actually more famous in Western puerh circles than it is in Asia.

The lesson it taught me was this: if a puerh initially has a “Burning Acid in Throat” taste and throatfeel, this quality will likely turn into an enjoyable “stronger throatfeel with sour aftertaste” with some moderate to heavier humid storage behind it.  Last year I read some similar tasting notes on a different puerh, so I bought a bunch of it up, and is it ever tasty (I need to post about that one soon, I think).

Anyways, as you readers may or may not remember, this very tea sold out on me before it was quietly restocked by Essence of Tea just before Black Friday.  I think the re-stocked price was even cheaper than the price they originally marked it at (or maybe the exchange rate is just more favorable now).  Either way, good for me.  It was included in my order along with this, this, and this- all nice teas by my estimation.

Ok, redemption time… let’s see what Hong Kong/Malaysian stored HK Henry is all about.

Dry leaves are greyish, typical of heavier humid/Hong Kong storage, and smell of old library but sweeter and grainier.

The first infusion starts with a slightly sour, smooth, woody onset which catches me off guard at first.  The tea body is slightly watery here, with a mild cooling camphorous aftertaste over a mild creamy sweet base.  It tastes light, deep, and almost fruity.

The second infusion has a sour almost dried and candied grapefruit taste, if you can imagine it.  It has a smooth pine tree base taste and a faint creamy sweetness underneath.  The pungent camphor is cooling in the aftertaste.  The mouthfeel is a bit gripping at the throat.

The third is smoother and more cohesive.  It starts with some sour notes over an increasingly woody pine and aged leaves base taste.  That grapefruit sourness runs throughout.  The tea liquor is light and spacious.  The mouthfeel is slightly drying, slightly coarse on the tongue, and gripping in the throat.  Menthol on the breath.  Long dried grapefruit aftertaste, slightly sour/bitter.  The Qi is slightly heavy in the head and behind the eyes.

The fourth infusion delivers a smooth, slightly sour onset with an aged grapefruit-like taste over pine woods and old leaves.  The sourness apparent in this puerh gives this medium humid stored and aged puerh a fresh zesty feeling which makes it unique.

The fifth infusion starts with more creamy sweet wood alongside sour grapefruit.  The pine taste is stronger on the breath than in the body, and the cooling camphor taste is there too.  There is a mineral, stone-like taste in the infusion also.  The liquor and body are light and almost dry but mildly gripping.  The Qi starts to feel mildly dizzying.

The sixth infusion is almost bean tasting, with less sour in the initial taste.  There are still wood notes under there as well as grapefruit.  This infusion is becoming less sour and more dry wood overall.  The cooling camphor aftertaste brings the most fruity grapefruit tastes out long in the aftertaste.

The seventh infusion is watering out a bit.  The viscosity of the liquor is not the strength of this tea.  There is an interesting incense note, pine wood, camphor.  The fruity grapefruit is very faint in the aftertaste only now.  The throat feel has a mild gripping sensation.

The eighth starts woody, incense, pine wood, long mild apricot and grapefruit taste under camphor wood.  The ninth is much the same.  The profile of this puerh is relatively simple but pretty delicious.

For the tenth I add 10 seconds to the flash infusion; it results in more woody tastes being pushed out.  The mouthfeeling and throatfeeling are more watery than gripping here, but there is a little of that.  This tea develops a kind of smoothness here.  The aftertaste is mildly fruity.

For the 11th I add 20 seconds to the flash and get aged but nicely refreshing pine woods and dried apricots, with less sour taste now but a little in the aftertaste.  Although this tea is not overly complex, it is interesting enough, clean, feels nice in the body, and makes me feel light.

For the 12th I add 30 seconds to the flash and it really is much the same, with a slightly more gripping mouthfeeling.  The long fruity taste is nice here.  The tastes are really clean here.

The 13th infusion is at about 60 seconds past flash and delivers more fermented autumn leaves and woods up front with some bitterness.  There is that same camphor and slight fruit in there.

The 14th is another long infusion pushing out mainly woods and autumn leaves; there is some barely fruity sweetness under some bitter and some sour.  A fresh clean menthol remains.

I long steep this one a handful more times.  I get some nice but not overpowering woody tastes with menthol and dried sour fruit.

Overall, this tea is really clean and pure, with an interesting and unique sour fruit and pine wood profile, a stimulating mild gripping throatfeeling, and a solid menthol aftertaste.  However, its liquor is on the thinner side even with the teapot stuffed with leaves, and its change from infusion to infusion is slight.  I enjoy this one for the smooth and easy drinking for sure.  I can only imagine that this more humidly stored version is probably closer to how this cake was originally intended to be stored when Hong Kong Henry Co. commissioned it.
I think I would have been pretty content with more of these but I don’t think I will seek out more…

…Maybe just a cake of the Taiwanese dry storage just for some fun comparison...

Peace

The Shape of Code: MI5 agent caught selling Huawei exploits on Russian hacker forums

An MI5 agent has been caught selling exploits in Huawei products, on an underground Russian hacker forum (a paper analyzing the operation of these forums; perhaps the researchers were hired as advisors). How did this news become public? A reporter heard Mr Wang Kit, a senior Huawei manager, complaining about not receiving a percentage of the exploit sale, to add to his quarterly sales report. A fair point, given that Huawei are funding a UK centre to search for vulnerabilities.

The ostensible purpose of the Huawei cyber security evaluation centre (funded by Huawei, but run by GCHQ, the UK’s signals intelligence agency) is to allay UK fears that Huawei have added back-doors to their products that enable the Chinese government to listen in on customer communications.

If this cyber centre finds a vulnerability in a Huawei product, they may or may not tell Huawei about it. Obviously, if it’s an exploitable vulnerability, and they think that Huawei don’t know about it, they could pass the exploit along to the relevant UK government department.

If the centre decides to tell Huawei about the vulnerability, there are two good reasons to first try selling it, to shady characters of interest to the security services:

  • having an exploit to sell gives the person selling it credibility (of the shady technical kind), in ecosystems the security services are trying to penetrate,
  • it increases Huawei’s perception of the quality of the centre’s work; by increasing the number of exploits found by the centre, before they appear in the wild (the centre has to be careful not to sell too many exploits; assuming they manage to find more than a few). Being seen in the wild adds credibility to claims the centre makes about the importance of an exploit it discovered.

How might the centre go about calculating whether to hang onto an exploit, for UK government use, or to reveal it?

The centre’s staff could be organized as two independent groups; an exploit found by both groups is more likely to be found by other hackers than an exploit found by just one group.

Perhaps GCHQ knows of other groups looking for Huawei exploits (e.g., the NSA in the US). Sharing information about exploits found, provides the information needed to more accurately estimate the likelihood of others discovering known exploits.
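
One way to make the two-independent-groups idea quantitative is capture-recapture estimation, e.g., the Lincoln-Petersen estimator borrowed from ecology. A minimal sketch (the function name and all the counts below are invented, purely for illustration):

    def lincoln_petersen(found_a: int, found_b: int, found_both: int) -> float:
        """Estimate the total number of discoverable exploits from two
        independent searches: N is roughly n_a * n_b / overlap."""
        if found_both == 0:
            raise ValueError("no overlap: the estimate is unbounded")
        return found_a * found_b / found_both

    # Group A finds 15 exploits, group B finds 12, and 6 are found by both:
    print(lincoln_petersen(15, 12, 6))  # estimates ~30 discoverable exploits

A large overlap would suggest that most of the easily-found exploits have already been found (so they might as well be revealed); a small overlap suggests many remain undiscovered.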

How might Huawei estimate the number of exploits MI5 are ‘selling’, before officially reporting them? Huawei probably have enough information to make a good estimate of the total number of exploits likely to exist in their products, but they also need to know the likelihood of discovering an exploit, per man-hour of effort. If Huawei have an internal team searching for exploits, they might have the data needed to estimate exploit discovery rate.
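
To make that rate-scaling argument concrete, here is a toy sketch (all numbers are invented, and the assumption of a constant discovery rate per man-hour is doing a lot of work):

    def expected_finds(internal_exploits: int, internal_hours: float,
                       centre_hours: float) -> float:
        """Scale an internal team's discovery rate (exploits per man-hour)
        by the evaluation centre's estimated effort."""
        rate = internal_exploits / internal_hours
        return rate * centre_hours

    # Internal team: 8 exploits in 10,000 man-hours; centre invests 25,000:
    print(expected_finds(8, 10_000, 25_000))  # expect roughly 20 discoveries

Comparing such an expectation against the number of exploits the centre officially reports would give Huawei a rough estimate of how many are being held back (or ‘sold’).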

Another approach would be for Huawei to add a few exploits to the code, and then wait to see if they are used by GCHQ. In fact, if GCHQ accuse Huawei of adding a back-door to enable the Chinese government to spy on people, Huawei could claim that the code was added to check whether GCHQ was faithfully reporting all the exploits it found, and not keeping some for its own use.

Charles Petzold: Reflections on Rereading “Slaughterhouse-Five”

Kurt Vonnegut’s novel Slaughterhouse-Five or The Children’s Crusade: A Duty Dance with Death was published 50 years ago today, as I was informed by several articles, including this one and this one in the New York Times. Fortunately, I rarely dispose of books that I’ve bought, so I was able to pluck the $1.95 paperback off my shelf and read it again, probably for the first time since my college years in the early 1970s. (Check your own bookshelves; you might find a copy as well.)

... more ...

s mazuk: furtho: Benjamin Lee’s aerial photograph of the Takaosan...



furtho:

Benjamin Lee’s aerial photograph of the Takaosan Interchange, Hachioji, Japan (via here)

Daniel Lemire's blog: Science and Technology links (March 30th 2019)

  1. As we age, we accumulate old and useless (senescent) cells. These cells should die, but they do not. Palmer et al. removed senescent cells in obese mice. They found that these mice were less diabetic and just generally healthier. That is, it appears that many of the health problems due to obesity might have to do with the accumulation of senescent cells.
  2. Europe is changing its copyright laws to force websites to be legally responsible for the content that users upload. In my opinion, copyright laws tend to restrict innovation. I also think that Europe is generally not interested in innovating: where is Europe’s Google or Europe’s Samsung?
  3. China is cloning police dogs.
  4. Do we create new neurons throughout life, or not? It remains a controversial question, but a recent article in Nature seems to indicate that neurogenesis in adult human beings is tangible:

    By combining human brain samples obtained under tightly controlled conditions and state-of-the-art tissue processing methods, we identified thousands of immature neurons in (…) neurologically healthy human subjects up to the ninth decade of life. These neurons exhibited variable degrees of maturation (…) In sharp contrast, the number and maturation of these neurons progressively declined as Alzheimer’s Disease advanced.

  5. Generally speaking, the overall evidence is that fit and healthy people tend to be smarter. It is a myth unsupported by science that the gym rat is dumb whereas the pale out-of-shape guy is smart. If you want to be smart, you had better stay fit and healthy. Evidently, this suggests that as you age, you may lose some of your intellectual sharpness. Cornelis et al. processed a large dataset of cognitive tests and they conclude that you are not losing your intelligence very much, at least until you reach a typical retirement age:

    declines in cognitive abilities between the end of the fourth decade and age 65 are small.

    In their experiments, fluid intelligence (basically our reasoning ability) did not change very much and sometimes increased over time. This apparently contradicts other studies based on smaller samples, and the authors discuss this apparent contradiction. Reaction time increased with age: older people are slower, everything else being equal.

The Shape of Code: The 2019 Huawei cyber security evaluation report

The UK’s Huawei cyber security evaluation centre oversight board has released its 2019 annual report.

The header and footer of every page contains the text “OFFICIAL”, which I assume is its UK government security classification. It lends an air of mystique to what is otherwise a meandering management report.

Needless to say, the report contains the usual puffery, e.g., “HCSEC continues to have world-class security researchers…”. World class at what? I hear they have some really good mathematicians, but have serious problems attracting good software engineers (such people can be paid a lot more, and get to do more interesting work, in industry; the industry demand for mathematicians, outside of finance, is weak).

The most interesting sentence appears on page 11: “The general requirement is that all staff must have Developed Vetting (DV) security clearance, …”. Developed Vetting is the most detailed and comprehensive form of security clearance in UK government (to quote Wikipedia).

Why do the centre’s staff have to have this level of security clearance?

The Huawei source code is not that secret (it can probably be found online, lurking in the dark corners of various security bulletin boards).

Is the real purpose of this cyber security evaluation centre, to find vulnerabilities in the source code of Huawei products, that GCHQ can then use to spy on people?

Or perhaps, this centre is used for training purposes, with staff moving on to work within GCHQ, after they have learned their trade on Huawei products?

The high level of security clearance applied to the centre’s work is the perfect smoke-screen.

The report claims to have found “Several hundred vulnerabilities and issues…”; a meaningless statement, e.g., this could mean one minor vulnerability and several hundred spelling mistakes. There is no comparison of the number of vulnerabilities found per effort invested, no comparison with previous years, no classification of the seriousness of the problems found, no mention of Huawei’s response (i.e., did Huawei agree that there was a problem).

How many vulnerabilities did the centre find that were reported by other people, e.g., the National Vulnerability Database? This information would give some indication of how good a job the centre was doing. Did this evaluation centre find the Huawei vulnerability recently disclosed by Microsoft? If not, why not? And if they did, why isn’t it in the 2019 report?

What about comparing the number of vulnerabilities found in Huawei products against the number found in vendors from the US, e.g., CISCO? Obviously back-doors placed in US products, at the behest of the NSA, need not be counted.

There is some technical material, starting on page 15. The configuration and component lifecycle management issues raised, sound like good points, from a cyber security perspective. From a commercial perspective, Huawei want to quickly respond to customer demand and a dynamic market; corners are likely to be cut off good practices every now and again. I don’t understand why the use of an unnamed real-time operating system was flagged: did some techie gripe slip through management review? What is a C preprocessor macro definition doing on page 29? This smacks of an attempt to gain some hacker street-cred.

Reading between the lines, I get the feeling that Huawei has been ignoring the centre’s recommendations for changes to their software development practices. If I were on the receiving end, I would probably ignore them too. People employed to do security evaluation are hired for their ability to find problems, not for their ability to make things that work; also, I imagine many are recent graduates, with little or no practical experience, who are just repeating what they remember from their course work.

Huawei should leverage its funding of a GCHQ spy training centre, to get some positive publicity from the UK government. Huawei wants people to feel confident that they are not being spied on, when they use Huawei products. If the government refuses to play ball, Huawei should shift its funding to a non-government, open evaluation center. Employees would not need any security clearance and would be free to give their opinions about the presence of vulnerabilities and ‘spying code’ in the source code of Huawei products.

i like this art: Awoiska van der Molen

Awoiska van der Molen

Work from her oeuvre.

“If Nature were to take a photo of itself, what would it look like? Nature would set its own exposure time, with plenty of lux during the day in the sunshine, and clear and dark at night with a full moon. Exposure would need to be lengthy, since Nature, in the sense of a group of living things in a landscape of rock and uneven ground, exists on an organic timescale of minutes, weeks, years. A shutter speed of one hundredth of a second offers merely the perspective of a carrier pigeon, which can distinguish 125 images per second. But a tree in all its glory in its natural setting requires an exposure of half an hour in the light of a crescent moon, or half a minute when the sun is low in the sky. The distance between subject and camera is a matter of metres, or tens of metres if you want to capture the bliss of a fringe of woodland on a hilltop at dusk, or hundreds of metres for a mountain turning its creased rhino-back to you in a gesture of friendship. What would be the joy revealed to us by Nature — specifically that little gathering of delicate, twiggy, dancing trees in white light — if it weren’t photographed and turned into an image that we, viewers, human beings, recognise as proof of living reality?” – Arjen Mulder excerpted from an essay for van der Molen’s monograph, Blanco.

The Geomblog: On PC submissions at SODA 2020

SODA 2020 (in SLC!!) is experimenting with a new submission guideline: PC members will be allowed to submit papers. I had a conversation about this with Shuchi Chawla (the PC chair) and she was kind enough (thanks Shuchi!) to share the guidelines she's provided to PC members about how this will work.


SODA is allowing PC members (but not the PC chair) to submit papers this year. To preserve the integrity of the review process, we will handle PC member submissions as follows. 
1. PC members are required to declare a conflict for papers that overlap in content with their own submissions (in addition to other CoI situations). These will be treated as hard conflicts. If necessary, in particular if we don't have enough confidence in our evaluation of a paper, PC members will be asked to comment on papers they have a hard conflict with. However, they will not have a say in the final outcome for such papers.  
2. PC submissions will receive 4 reviews instead of just 3. This is so that we have more confidence on our evaluation and ultimate decision. 
3. We will make early accept/reject decisions on PC member submissions, that is, before we start considering "borderline" papers and worrying about the total number of papers accepted. This is because the later phases of discussion are when subjectivity and bias tend to creep in the most. 
4. In order to be accepted, PC member submissions must receive no ratings below "weak accept" and must receive at least two out of four ratings of "accept" or above.  
5. PC member submissions will not be eligible for the best paper award.

My understanding is that this was done to solve the problem of not being able to get people to agree to be on the PC - this year's PC has substantially more members than prior years.

And yet....

Given all the discussion about conflicts of interest, implicit bias, and double blind review, this appears to be a bizarrely retrograde move, and in fact one that sends a very loud message that issues of implicit bias aren't really viewed as a problem. As one of my colleagues put it sarcastically when I described the new plan:

"why don't they just cut out the reviews and accept all PC submissions to start with?"
and as another colleague pointed out:

"It's mostly ridiculous that they seem to be tying themselves in knots trying to figure out how to resolve COIs when there's a really easy solution that they're willfully ignoring..."

Some of the arguments I've been hearing in support of this policy frankly make no sense to me.

First of all, the idea that a more heightened scrutiny of PC papers can alleviate the bias associated with reviewing papers of your colleagues goes against basically all of what we know about implicit bias in reviewing. The most basic tenet of human judgement is that we are very bad at filtering our own biases and this only makes it worse. The one thing that theory conferences (compared to other venues) had going for them regarding issues of bias was that PC members couldn't submit papers, but now....

Another claim I've heard is that the scale of SODA makes double blind review difficult. It's hard to hear this claim without bursting out into hysterical laughter (and from the reaction of the people I mentioned this to, I'm not the only one).  Conferences that manage with double blind review (and PC submissions btw) are at least an order of magnitude bigger (think of all the ML conferences). Most conference software (including easy chair) is capable of managing the conflicts of interest without too much trouble. Given that SODA (and theory conferences in general) are less familiar with this process, I’ve recommended in the past that there be a “workflow chair” whose job it is to manage the unfamiliarity associated with dealing the software. Workflow chairs are common at bigger conferences that typically deal with 1000s of reviewers and conflicts.

Further, as a colleague points out, what one should really be doing is "aligning nomenclature and systems with other fields: call current PC as SPC or Area Chairs, or your favorite nomenclature, and add other folks as reviewers. This way you (i) get a list of all conflicts entered into the system, and (ii) recognize the work that the reviewers are doing more officially as labeling the PC members. "


Changes in format (and culture) take time, and I'm still hopeful that the SODA organizing team will take a lesson from ESA 2019 (and their own resolution, passed a year or so ago, to look at DB review more carefully) and consider exploring DB review. But this year's model is certainly not going to help.

Update: Steve Blackburn outlines how PLDI handles PC submissions (in brief, double blind + external review committee)

Update: Michael Ekstrand takes on the question that Thomas Steinke asks in the comments below: "How is double blind review different from fairness-through-blindness?".

Disquiet: Disquiet Junto Project 0378: Blue(tooth) Haze

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Deadline: This project’s deadline is Monday, April 1, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the afternoon, California time, on Thursday, March 28, 2019.

Tracks will be added to the playlist for the duration of the project.

These are the instructions that went out to the group’s email list (at tinyletter.com/disquiet-junto):

Disquiet Junto Project 0378: Blue(tooth) Haze
The Assignment: Experiment with the sonic qualities of a failing signal.

Step 1: Find some sort of Bluetooth-enabled audio connection that is available to you. It might be headphones, a microphone, or other devices. The important thing is that audio can be sent to one device from another device by Bluetooth.

Step 2: Experiment with a sound sent via Bluetooth using the connection decided upon in Step 1. Work to find situations in which Bluetooth begins to fail, where the sonic signature of that signal failure becomes apparent. This will likely be due to distance, but you may find other creative approaches to achieve the distortion.

Step 3: Use the situation(s) located in Step 2 as the basis for an original piece of music, stressing an audio signal and then recording the way that signal distorts due to the failure of Bluetooth.

Seven More Important Steps When Your Track Is Done:

Step 1: Include “disquiet0378” (no spaces or quotation marks) in the name of your track.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0378” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential for the subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your track. It is helpful but not essential that you use SoundCloud to host your track.

Step 4: Post your track in the following discussion thread at llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0378-blue-tooth-haze/

Step 5: Annotate your track with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #disquietjunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Additional Details:

Deadline: This project’s deadline is Monday, April 1, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the afternoon, California time, on Thursday, March 28, 2019.

Length: The length is up to you. Short is good.

Title/Tag: When posting your track, please include “disquiet0378” in the title of the track, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, post one finished track with the project tag, and be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: Consider setting your track as downloadable and allowing for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include this following information:

More on this 378th weekly Disquiet Junto project — Disquiet Junto Project 0378: Blue(tooth) Haze / The Assignment: Experiment with the sonic qualities of a failing signal — at:

https://disquiet.com/0378/

More on the Disquiet Junto at:

https://disquiet.com/junto/

Subscribe to project announcements here:

http://tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0378-blue-tooth-haze/

There’s also a Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.

Image associated with this project adapted (cropped, colors changed, text added, cut’n’paste) thanks to a Creative Commons license from a photo credited to Russell Davies:

https://flic.kr/p/2aKR98

https://creativecommons.org/licenses/by-nc/2.0/

Planet Lisp: Lispers.de: Lisp-Meetup in Hamburg on Monday, 1st April 2019

We meet at Ristorante Opera, Dammtorstraße 7, Hamburg, starting around 19:00 CET on 1st April 2019.

This is an informal gathering of Lispers. Svante will talk a bit about the implementation of lispers.de. You are invited to bring your own topics.

The Shape of Code: Using Black-Scholes in software engineering gives a rough lower bound

In the financial world, a call option is a contract that gives the buyer the option (but not the obligation) to purchase an asset, at an agreed price, on an agreed date (from the other party to the contract).

If I think that the price of jelly beans is going to increase, and you disagree, then I might pay you a small amount of money for the right to buy a jar of jelly beans from you, in a month’s time, at today’s price. A month from now, if the price of jelly beans has gone down, I buy a jar from whoever is selling at the lower price, but if the price has gone up, you have to sell me a jar at the previously agreed price.

I’m in the money if the price of jelly beans goes up; you are in the money if the price goes down (I paid you a premium for the right to purchase at what is known as the strike price).
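
The payoff side of this is simple arithmetic; here is a toy sketch of the jelly-bean contract (the prices and the 1.0 premium are invented for illustration):

    def call_payoff(price_at_expiry: float, strike: float, premium: float) -> float:
        """Net profit to the buyer of a call: exercise only when it pays."""
        return max(price_at_expiry - strike, 0.0) - premium

    print(call_payoff(14.0, 10.0, 1.0))  # price rose to 14: buyer nets 3.0
    print(call_payoff(8.0, 10.0, 1.0))   # price fell: buyer loses only the 1.0 premium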

Do you see any parallels with software development here?

Let’s say I have to rush to complete the implementation of some functionality by the end of the week. I might decide to forego complete testing, or not follow company coding practices, just to get the code out. At a later date I can decide to pay back the time needed to correct my short-cuts; it is possible that the functionality will not be used, so the rework will not be needed.

This sounds like a call option (you might have thought of technical debt, which is, technically, an incorrect though commonly used term). I am both the buyer and seller of the contract. As the seller of the call option I receive a premium in the form of saved time, and as the buyer I pay a premium via the potential for things to go wrong. Sometime later, the seller might pay the price of sorting out the code.

A put option involves the right to sell (rather than buy).

In the financial world, speculators are interested in the optimal pricing of options, i.e., what should the premium, strike price and expiry date be for an asset having a given price volatility?

The Black-Scholes equation answers this question (and won its creators a Nobel prize).
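
For the curious, the closed-form result for a European call is short enough to sketch. This is the textbook formula; the parameter names are my own, and it is an illustration rather than anything production-ready:

    from math import erf, exp, log, sqrt

    def norm_cdf(x: float) -> float:
        """Standard normal cumulative distribution, via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(spot: float, strike: float, rate: float,
                sigma: float, years: float) -> float:
        """Black-Scholes price of a European call: current asset price,
        strike price, risk-free rate, annualized volatility, years to expiry."""
        d1 = (log(spot / strike) + (rate + 0.5 * sigma ** 2) * years) / (
            sigma * sqrt(years))
        d2 = d1 - sigma * sqrt(years)
        return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

    # Jelly beans at 10 today, strike 10, 5% rate, 30% volatility, one month:
    print(bs_call(10.0, 10.0, 0.05, 0.30, 1.0 / 12.0))  # the model's fair premium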

Over the years, various people have noticed similarities between financial options thinking, and various software development activities. In fact people have noticed these similarities in a wide range of engineering activities, not just computing.

The term real options is used for options thinking outside of the financial world. The difference in terminology is important, because financial and engineering assets can have very different characteristics, e.g., financial assets are traded, while many engineering assets are sunk costs (such as drilling a hole in the ground).

I have been regularly encountering uses of the Black-Scholes equation, in my trawl through papers on the economics of software engineering (in some cases a whole PhD thesis). In most cases, the authors have clearly failed to appreciate that certain preconditions need to be met, before the Black-Scholes equation can be applied.

I now treat use of the Black-Scholes equation, in a software engineering paper, as reasonable cause for instant deletion of the pdf.

If you meet somebody talking about the use of Black-Scholes in software engineering, what questions should you ask them to find out whether they are just spouting techno-babble?

  • American options are a better fit for software engineering problems; why are you using Black-Scholes? An American option allows the option to be exercised at any time up to the expiry date, while a European option can only be exercised on the expiry date. The Black-Scholes equation is a solution for European options (no optimal solution for American options is known). A sensible answer is that use of Black-Scholes provides a rough estimate of the lower bound of the asset value. If they don’t know the difference between American/European options, well…
  • Partially written source code is not a tradable asset; why are you using Black-Scholes? An assumption made in the derivation of the Black-Scholes equation is that the underlying assets are freely tradable, i.e., people can buy/sell them at will. Creating source code is a sunk cost, who would want to buy code that is not working? A sensible answer may be that use of Black-Scholes provides a rough estimate of the lower bound of the asset value (you can debate this point). If they don’t know about the tradable asset requirement, well…
  • How did you estimate the risk adjusted discount rate? Options involve balancing risks and getting values out of the Black-Scholes equation requires plugging in values for risk. Possible answers might include the terms replicating portfolio and marketed asset disclaimer (MAD). If they don’t know about risk adjusted discount rates, well…

If you want to learn more about real options: “Investment under uncertainty” by Dixit and Pindyck, is a great read if you understand differential equations, while “Real options” by Copeland and Antikarov contains plenty of hand holding (and you don’t need to know about differential equations).

Better Embedded System SW: Potentially deadly automotive software defects

Here's a list of potentially deadly automotive software defects, mostly from NHTSA Recall notices.

There is still a lot of resistance to the idea that car software can have fatal defects that result in deaths not due to driver error. In fact such defects do exist, and for many of them we've just gotten lucky that few or no people have died as a result. Recently we've been seeing more deadly software defects being reported. This posting is intended to give a taste of what's been going on in automotive software quality. This is a very partial list of bad software that was deployed on production vehicles in the US.

This list includes a variety of subsystems including unintended acceleration, steering failures, brake assist failures, headlights going out while driving, and quite a lot of air bag failures. There are software defects, configuration management errors, leaving the module in "factory mode" when shipped, and even EEPROM wearout. Overall this paints a picture of an industry that is shipping a lot of safety critical software defects.  In fairness, yes, these are all ones that are being fixed, and there are certainly other causes of fatal accidents. (Presumably there are others not yet being fixed, if for no other reason than that the cars are still new on the road. But at least some of these recalls sure look like mistakes that simply should not be happening in life critical software.)

The list is almost certainly much, much longer, and I simply ran out of time trying to go through the full NHTSA database.  And even that doesn't include everything that happens. The list is heavy in 2013-2015 mostly because that was the most convenient source material I found. There is no reason whatsoever to believe things have gotten dramatically better since then.

The purpose of this list is not to call out any particular company or software defect. Rather, the point is that safety critical software defects are both pervasive and persistent across the automotive industry.  Yes, we can have discussions about how many vehicles vs. how many defects. But it still does not instill confidence about life critical software in a self-certifying industry that in the US is not required to follow international software safety standards.
  • "Automatic braking systems in some Nissan Rogues are going rogue, safety group says" / Mar 2019
  • "Alfa Romeo recalling 60,000 vehicles to repair cruise management fault" / Mar 2019
  • "Ford recalls 1.5 million Ford Focus cars that could stall with fuel tank problem" / Oct 2018
  • "Toyota recalls trucks, SUVs and cars to fix air bag problem" / Oct 2018
    • "Toyota says the air bag control computer can erroneously detect a fault when the vehicles are started. With a fault, the air bags may not deploy in a crash. The company wouldn't say if the problem has caused any injuries."
    • https://www.abc57.com/news/toyota-recalls-trucks-suvs-and-cars-to-fix-air-bag-problem
  • "Toyota isssues second prius recall in a month on crash risk" / Oct 2018
  • "Safety systems may be disabled when in use" (Mitsubishi) / Sept. 2018
    • "Inappropriate" software in the hydraulic ECU causes the pump to generate electrical noise that resets the ECU. That reset can cause: automatic braking to be cancelled, wheels lock momentarily, stability control to be momentarily cancelled, release break of brake auto-hold is active.
    • NHTSA recall 18V-621
  • "GM recalls more than 1M pickups, SUVs for power steering problem" / Sept. 2018
    • 30 crashes; two injuries, no deaths attributed
    • Voltage drop and return causes momentary power steering failure; fixed via software update
    • https://www.freep.com/story/money/cars/general-motors/2018/09/13/gm-recall-pickups-suvs-power-steering/1287911002/
  • "Expert investigation says BMW software to blame" / Aug 2018
  • "Fiat Chrysler recalls 5.3 million vehicles for cruise control defect" / May 2018
  • Incorrect Speed Limitation Software (Mercedes-Benz) / 2018
    •  These vehicles may be equipped with the incorrect reverse speed limitation software. While in reverse, any abrupt changes in steering while exceeding 16 MPH may cause the vehicle to become unstable.
    • NHTSA recall 18V-457
  • Cruise control may not disengage (Mercedes-Benz) / 2017
    • ESP software malfunction may cause engine not to reduce power regardless of speed, driving situation, or brake application.
    • NHTSA recall 17V-713
  • "Fiat Chrysler recalls 1.25 million trucks over software error" / 2017
  • Unintended vehicle movement (Ford) / 2017
    • Quick movement of the gear shift can cause selection of reverse gear for up to 1 second when shifting into the intended drive (forward) gear.
    • NHTSA recall 17V-669
  • Air bags may not deploy in a crash (Mitsubishi) / 2017
    • SRS ECU misinterprets vibrations, disabling air bags from deploying in a crash
    • NHTSA recall 17V-686
  • Inadvertent Side Air Bag Deployment (Chrysler) / 2015
    • Side airbags may unexpectedly deploy due to incorrect software calibration; may result in crash or injury
    • NHTSA Recall 15V-460 and 15V-467
  • Radio Software Security Vulnerabilities (Chrysler) / 2015
    • Exploitation of the software vulnerability may result in unauthorized remote modification and control of certain vehicle systems, increasing the risk of a crash.
    • NHTSA Recall 15V-461, 15V-508
  • "Toyota recalls 625,000 hybrids: Software bug kills engines dead with thermal overload" / July 2015
    • Software settings for motor/generator ECU cause thermal damage, then propulsion shutdown
    • https://www.theregister.co.uk/2015/07/15/toyota_recalls_625000_hybrids_over_enginekilling_software_glitch/
    • Note previous recall 14V-053 for similar sounding problem
  • Tire pressure monitoring system message (Ferrari) / 2015
    • TPMS displays 50 mph speed limit warning instead of "do not proceed" warning due to software defect. Driving on punctured tire would cause loss of vehicle control and crash.
    • NHTSA Recall 15V-306
  • Airbag Incorrect Deployment Timing (BMW) / 2015
    • Driver front air bag timing incorrect / fails to meet FMVSS 208 due to programming error
    • NHTSA Recall 15V-148 
  • Passenger Air Bag may be disabled (Jaguar) / 2015
    • Light weight adult may be misclassified, disabling air bag
    • NHTSA Recall 15V-093
  • Unintended side air bag deployment (Chrysler) / 2015
    • Unintended side curtain and seat air bag deployment during operation / software reflash
    • NHTSA Recall 15V-041
  • Brake controller might not activate trailer brakes (Ford) / 2015
    • Trailer brakes not activated when towing, lengthening stopping distance, increasing risk of crash. Fixed via powertrain control module reflash.
    • NHTSA Recall 15V-710
  • On but unattended vehicle may cause CO poisoning (GM) / 2015
    • Vehicle may turn on gasoline engine to recharge hybrid battery, causing carbon monoxide poisoning (e.g., if car is in garage)
    • NHTSA Recall 15V-145
  • Incorrect electric power steering software setting (Jaguar) / 2015
    • Power steering set in factory operating mode. Vehicle can experience additional steering inputs from EPS causing driver to lose ability to control the vehicle.
    • NHTSA Recall 15V-569
  • Air bag may not detect passenger in seat (Nissan) / 2015
    • Configuration management error: incorrect occupant classification software version installed, resulting in no air bag deployment
    • NHTSA Recall 15V-681
  • "Honda admits software problem, recalls 175,000 hybrids" / July 2014
  • Transmission calibration error (Ford) / 2014
    • Due to software calibration error vehicle may be in and display "drive" but engage "reverse" for 1.5 seconds.
    • NHTSA Recall 14V-204
  • Headlights may unintentionally turn off (Motor Coach Industries) / 2014
    • A mux controller may unintentionally turn off headlights while vehicle is in gear
    • NHTSA Recall 14V-370
  • Brake vacuum pump may stop functioning (Mitsubishi) / 2014
    • Software defect causes false detection of stuck relay, disabling brake power assist
    • NHTSA Recall 14V-522
  • Loss of brake vacuum assist (GM) / 2014
    • Loss of power brake assist; fixed with software reflash
    • NHTSA Recall 14V-247
  • Reprogram sensing and diagnostics module (GM) / 2014
    • Module left in "manufacturing mode" when shipped, disabling airbags
    • NHTSA Recall 14V-247
  • Passenger airbag may be disabled (Jaguar) / 2014
    • EEPROM wearout (which is due to a software defect) causes airbag to be partially or totally disabled
    • NHTSA Recall 14V-395
  • Hybrid transmission software (Champion Bus) / 2014
    • Software may improperly raise vehicle's engine speed during downshifts without the driver's input. The increase in speed may result in unintended acceleration.
    • NHTSA Recall 14V-303  (See also 14V-043; 14V-043 Navistar; 14V-026 Kenworth)
  • Cruise control unintended continued acceleration (Chrysler) / 2014
    • Unintended continued acceleration after releasing accelerator due to adaptive cruise control software; may increase risk of crash
    • NHTSA Recall 14V-293
  • Side-curtain rollover airbag deployment delay (Ford) / 2014
    • Errors in the programming software which may result in delayed deployment of side-curtain rollover airbag
    • NHTSA Recall 14V-237
  • Improper seat belt restraint software (Toyota) / 2014
    • Improper software can apply insufficient restraint force in a crash (e.g., the force appropriate for a 110 pound passenger applied to a larger passenger)
    • NHTSA Recall 14V-272
  • Air bag may not detect passenger in seat (Nissan) / 2014
    • Software may incorrectly classify passenger seat as empty; airbag will not deploy
    • NHTSA Recall 14V-138
  • Vehicle may gradually accelerate unexpectedly (Nissan) / 2014
    • If a lost signal from the throttle position sensor is regained (intermittent fault), fail-safe mode is deactivated, opening the throttle and resulting in "gradual" acceleration due to a software error.
    • NHTSA Recall 14V-583
  • Inadvertent Air Bag deployment (Ram) / 2014
    • Side air bags deploy when hitting potholes; fixed via software update
    • NHTSA Recall 14V-528
  • Side airbags may deploy on the incorrect side (Chrysler) / 2013
    • Airbag on the wrong side of the vehicle could deploy, leaving occupants with no airbag protection at point of impact due to a software defect
    • NHTSA Recall 13V-283
  • Delayed deployment or non-deployment of airbags (Chrysler/Jeep) / 2013
    • Airbag deployment delayed or no airbag deployment in rollover due to software defect
    • NHTSA Recall 13V-233
  • Airbag deployment software (Chrysler) / 2013
    • Incorrect software installed; air bags may not deploy or might deploy improperly
    • NHTSA Recall 13V-291
  • Improper occupant classification / 2012
    • Incorrect software installed that misclassifies passengers; airbag might not deploy when it should, deploys incorrectly, or deploys when it should not
    • NHTSA Recall 12V-198
  • Occupant classification system (Hyundai) / 2012
    • Software might miss small stature adults and not deploy airbag.
    • NHTSA Recall 12V-354 
  • Cruise Control System/Brake Switch Failure (Mercedes-Benz) / 2011
    • Brake pedal may not automatically disengage cruise control as expected. (Other methods still work.)  If driver pumps brakes it will take unusually high force to stop vehicle.
    • NHTSA Recall 11V-208
  • Engine stall prevention assist software (Honda) / 2011
    • Unexpected vehicle movement: the ECU software providing hybrid electric power may unexpectedly move the vehicle in the reverse direction if the engine stalls.
    • NHTSA Recall 11V-458
  • Loss of steering power assist (Toyota) / 2010
  • "Toyota: software to blame for Prius brake problems" / 2010
  • ABS ECU Programming (Toyota) / 2010
    • Inconsistent brake feel; increased stopping distances for a given pedal force due to ABS programming, raising the possibility of a crash.
    • NHTSA Recall 10V-039
  • Restraint control module (Land Rover) / 2009
    • Passenger airbag disabled as a result of temporary loss of CAN network messages and a software defect
    • NHTSA Recall 09V-467
  • Double Clutch Gearbox (BMW) / 2008
    • Engine stall increasing risk of a crash due to software multistage downshift defect
    • NHTSA Recall 08V-595
  • Passenger sensing system (GM) / 2008
    • Software condition within passenger sensing system may disable passenger air bag (or enable when it should be disabled).
    • NHTSA Recall 08V-582
  • Passenger air bag fail to deploy (Nissan) / 2008
    • Passenger air bag might not deploy due to low battery voltage combined with software defect
    • NHTSA Recall 08V-066
  • Engine Control Module Software Update (VW) / 2008
    • Software defect can cause unexpected engine surge that can "result in a crash without warning."
    • NHTSA Recall 08V-235
  • SRS Electronic control unit software (Maserati) / 2007
    • Passenger air bag might not deploy if car battery is not fully charged due to software defect
    • NHTSA Recall 07V-550
  • SRS control unit software (Volvo) / 2007
    • Two software errors result in late deployment of side airbags
    • NHTSA Recall 07V-500
  • Passenger side airbag does not deploy (Volkswagen) / 2006
    • A weak battery could cause air bag control unit to deactivate due to a software defect; airbag will not deploy in a crash
    • NHTSA Recall 06V-454
  • Electronic Throttle Control (GM) / 2006
    • ETC torque monitoring failsafe disabled, permitting throttle opening greater than commanded (i.e., UA) due to a software defect
    • NHTSA Recall 06V-007
  • Powertrain control module (DaimlerChrysler) / 2006
    • Software can cause momentary lock up of drive wheels at speeds over 40 mph if operator shifts from drive to neutral and back.
    • NHTSA Recall 06V-341
  • BMW/Driver's seat occupant detection system / 2004
    • Software can't reliably determine if driver seat is occupied; airbag may not deploy.
    • NHTSA Recall 04V-379
  • Jaguar/Forward drive gear / 2004
    • Selecting forward drive gear could select reverse while in forward motion, without indication. (Apparent limp home mode logic defect.)
    • NHTSA Recall 04-024
  • BMW/Engine Idle Speed/DME Idle Control / 2003
    • Increase of idle speed up to 1,300 RPM. If a gear is selected, the driver may feel as if the vehicle is being pushed.
    • NHTSA Recall 03V124
  • KIA/ABS Electronic Control Module / 2003
    • A programming error in the ABS causes reduced braking force at speeds below 25 mph, extending stopping distances
    • NHTSA Recall 03V-158
  • "GM Admits Brake Flaws After Inquiry" / July 1999
  • Chrysler/Interior systems: air bag / 1996
    • Air bag software error which can delay air bag deployment
    • NHTSA Recall 96V-060

Noteworthy: These are software-related problems with cars that are worth knowing about, but less black and white because, for example, there has been no general recall issued.
Notes:
  • To access NHTSA recalls you need to visit https://www.nhtsa.gov/recalls then select Vehicle then select "search by NHTSA ID", which can take a few mouse clicks to find on the indicated NHTSA web site.  (It might be the interface has changed since I posted this; you might need to poke around to find the lookup function.)
  • This is a work in progress and a VERY incomplete list.  I thought this would be a one-day exercise, but, well, no. If you know of something really important I've missed, please let me know!  More importantly, if you know of someone who is interested in maintaining a list like this, especially as a more rigorous academic study, I'd be happy to collaborate.  I simply don't have the time to keep up with this.
  • Reasonable people can perhaps disagree about the inclusion or exclusion of some items. But the point is really more about the volume rather than any individual item. By definition each recall is a defect that should not have been shipped, because it resulted in a recall.  I've paraphrased the recall reports. If you want to know more be sure to look at the supporting documents on the NHTSA web site, which often have more details than the summaries.
  • To be "deadly" these defects have to be software faults that either have caused, could reasonably cause, or should have reasonably prevented significant injury or death. (This includes defects in failsafes, for example.) A partial list includes: un-commanded acceleration (UA), stalling at speed (dangerous when merging onto a highway), failure to deactivate cruise control, extended braking distances, airbag disablement, and incorrect airbag deployment.  What happens in practice depends upon the circumstances.
  • This should not be construed to be an expert opinion of the root cause of any particular mishap. I am summarizing publicly available information and have not independently verified the technical facts in each case. Those public sources might be incorrect, or I might not have fully understood the implications of the statements in those sources. Again, this is more about the overall trend and not any particular incident report.
  • There are plenty of commenters who say things for unintended acceleration like "just apply the brakes, because brakes always overcome the engine." First, this is simply not true in many situations due to loss of vacuum assist, drivers with weak leg strength, etc. A single point fault or sufficiently likely multi-point fault should not be trying to kill the occupants in the first place, so it's still a defect.
  • The air bag software problems were found in: https://www.autosafety.org/staging/wp-content/uploads/import/Historical%20Airbag%20Recalls_1.pdf  I independently verified them on the NHTSA database.
  • I independently verified on the NHTSA database some drivetrain recalls found here: https://www.autosafety.org/sites/default/files/imce_staff_uploads/Exemplary%20Vehicle%20Software%20Recalls.pdf and here: https://www.autosafety.org/wp-content/uploads/2016/04/2014-15-Software-Recalls.pdf
  • If you want to go exploring, you can download a copy of the raw database that I used for some of the other defects here: https://www-odi.nhtsa.dot.gov/downloads/
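
If you do go exploring, the recall extract is distributed as a tab-delimited flat file; here is a minimal filtering sketch, with the caveat that the file name and column positions below are assumptions you should check against the layout documentation included in the download:

    import csv

    CAMPAIGN, MAKE, DEFECT_SUMMARY = 0, 2, 19  # assumed column positions

    def software_recalls(path: str):
        """Yield (campaign number, make) for recalls whose defect summary
        mentions software."""
        with open(path, newline="", encoding="latin-1") as f:
            for row in csv.reader(f, delimiter="\t"):
                if len(row) > DEFECT_SUMMARY and "SOFTWARE" in row[DEFECT_SUMMARY].upper():
                    yield row[CAMPAIGN], row[MAKE]

    for campaign, make in software_recalls("FLAT_RCL.txt"):  # assumed file name
        print(campaign, make)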

i like this art: Lorena Molina

Lorena Molina

Work from How Blue?.

“The textile history in El Salvador is complex and embedded with the genocides and persecution of indigenous people. It is also tied to the 12 year civil war, and the ways that globalization and capitalism affect communities and traditional practices. By layering photographs made in El Salvador with fabric that remind me of my childhood dresses sewn by my grandmother, the photographs build new sites for longing and remembering. I am making connections between the disappearance of this skill and my displacement from home.” – Lorena Molina

OUR VALUED CUSTOMERS: Internet shaped like a man...

churchturing.org / 2019-04-19T18:33:33