Penny Arcade: Comic: Nite Time Is The Rite Time

New Comic: Nite Time Is The Rite Time

Bifurcated Rivets: From FB

Excellent

Bifurcated Rivets: From FB

Leyendecker

Bifurcated Rivets: From FB

Hard

Bifurcated Rivets: From FB

Interesting

Bifurcated Rivets: From FB

Excellent

Slashdot: Qualcomm Urges US Regulators To Reverse Course, Ban Some iPhones

An anonymous reader quotes a report from Reuters: Qualcomm is urging U.S. trade regulators to reverse a judge's ruling and ban the import of some Apple iPhones in a long-running patent fight between the two companies. Qualcomm is seeking the ban in hopes of dealing Apple a blow before the two begin a major trial in mid-April in San Diego over Qualcomm's patent licensing practices. Qualcomm has sought to apply pressure to Apple with smaller legal challenges ahead of that trial and has won partial iPhone sales bans in China and Germany against Apple, forcing the iPhone maker to ship only phones with Qualcomm chips to some markets. Any possible ban on iPhone imports to the United States could be short-lived because Apple last week for the first time disclosed that it has found a software fix to avoid infringing on one of Qualcomm's patents. Apple asked regulators to give it as much as six months to prove that the fix works. Qualcomm brought a case against Apple at the U.S. International Trade Commission in 2017 alleging that some iPhones violated Qualcomm patents related to helping smart phones run well without draining their batteries. Qualcomm asked for an import ban on some older iPhone models containing Intel chips. In September, Thomas Pender, an administrative law judge at the ITC, found that Apple violated one of the patents in the case but declined to issue a ban. Pender reasoned that imposing a ban on Intel-chipped iPhones would hand Qualcomm an effective monopoly on the U.S. market for modem chips, which connect smart phones to wireless data networks. Pender's ruling said that preserving competition in the modem chip market was in the public interest as speedier 5G networks come online in the next few years.

Read more of this story at Slashdot.

Recent additions: schematic 0.5.0.0

Added by dredozubov, Wed Feb 20 12:28:28 UTC 2019.

JSON-biased spec and validation tool

Recent additions: record 0.3.2.1

Added by NikitaVolkov, Wed Feb 20 12:21:19 UTC 2019.

First class records implemented with quasi-quotation

Open Culture: Haruki Murakami Announces an Archive That Will House His Manuscripts, Letters & Collection of 10,000+ Vinyl Records

Image by wakarimasita, via Wikimedia Commons

It has become the norm for notable writers to bequeath documents related to their work, and even their personal correspondence, to an institution that promises to maintain it all, in perpetuity, in an archive open to scholars. Often the institution is located at a university to which the writer has some connection, and the case of the Haruki Murakami Library at Tokyo's Waseda University is no exception: Murakami graduated from Waseda in 1975, and a dozen years later used it as a setting in his breakthrough novel Norwegian Wood.

That book's portrayal of Waseda betrays a somewhat dim view of the place, but Murakami looks much more kindly on his alma mater now than he did then: he must, since he plans to entrust it with not just all his papers but his beloved record collection as well. If you wanted to see that collection today, you'd have to visit him at home. "I exchanged my shoes for slippers, and Murakami took me upstairs to his office," writes Sam Anderson, having done just that for a 2011 New York Times Magazine profile of the writer. "This is also, not coincidentally, the home of his vast record collection. (He guesses that he has around 10,000 but says he’s too scared to count.)"

Having announced the plans for Waseda's Murakami Library at the end of last year, Murakami can now rest assured that the counting will be left to the archivists. He hopes, he said at a rare press conference, "to create a space that functions as a study where my record collection and books are stored." In his own space now, he explained, he has "a collection of records, audio equipment and some books. The idea is to create an atmosphere like that, not to create a replica of my study." Some of Murakami's stated motivation to establish the library comes out of convictions about the importance of "a place of open international exchanges for literature and culture" and "an alternative place that you can drop by." And some of it, of course, comes out of practicality: "After nearly 40 years of writing, there is hardly any space to put the documents such as manuscripts and related articles, whether at my home or at my office."

"I also have no children to take care of them," Murakami added, "and I didn’t want those resources to be scattered and lost when I die." Few of his countless readers around the world can imagine that day coming any time soon, turn 70 though Murakami did last month, but many are no doubt making plans even now for a trip to the Waseda campus to see what shape the Murakami Library takes during the writer's lifetime, especially since he plans to take an active role in what goes on there. "Murakami is also hoping to organize a concert featuring his collection of vinyl records," notes The Vinyl Factory's Gabriela Helfet. Until he does, you can have a listen to the playlists, previously featured here on Open Culture, of 96 songs from his novels and 3,350 from his record collection — but you'll have to recreate the atmosphere of his study yourself for now.

Related Content:

A 3,350-Song Playlist of Music from Haruki Murakami’s Personal Record Collection

A 96-Song Playlist of Music in Haruki Murakami’s Novels: Miles Davis, Glenn Gould, the Beach Boys & More

A 26-Hour Playlist Featuring Music from Haruki Murakami’s Latest Novel, Killing Commendatore

Stream Big Playlists of Music from Haruki Murakami’s Personal Vinyl Collection and His Strange Literary Worlds

Haruki Murakami’s Passion for Jazz: Discover the Novelist’s Jazz Playlist, Jazz Essay & Jazz Bar

Haruki Murakami Became a DJ on a Japanese Radio Station for One Night: Hear the Music He Played for Delighted Listeners

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Haruki Murakami Announces an Archive That Will House His Manuscripts, Letters & Collection of 10,000+ Vinyl Records is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

Recent additions: template-haskell-compat-v0208 0.1.1.1

Added by NikitaVolkov, Wed Feb 20 11:57:56 UTC 2019.

A backwards compatibility layer for Template Haskell newer than 2.8

Recent additions: simple-cmd 0.1.3

Added by JensPetersen, Wed Feb 20 11:56:54 UTC 2019.

Simple String-based process commands

Slashdot: Apple To Target Combining iPhone, iPad and Mac Apps by 2021: Report

Mark Gurman, reporting for Bloomberg: Apple wants to make it easier for software coders to create tools, games and other applications for its main devices in one fell swoop -- an overhaul designed to encourage app development and, ultimately, boost revenue. The ultimate goal of the multistep initiative, code-named "Marzipan," is by 2021 to help developers build an app once and have it work on the iPhone, iPad and Mac computers, said people familiar with the effort. That should spur the creation of new software, increasing the utility of the company's gadgets. Later this year, Apple plans to let developers port their iPad apps to Mac computers via a new software development kit that the company will release as early as June at its annual developer conference. Developers will still need to submit separate versions of the app to Apple's iOS and Mac App Stores, but the new kit will mean they don't have to write the underlying software code twice, said the people familiar with the plan. In 2020, Apple plans to expand the kit so iPhone applications can be converted into Mac apps in the same way. Further reading: Tim Cook, in April 2018: Users Don't Want iOS To Merge With MacOS.

Read more of this story at Slashdot.

Slashdot: Proposed Bill Would Force Arizonians To Pay $250 To Have Their DNA Added To a Database

technology_dude writes: One by one, thresholds are being crossed where the collection and storage of personal data is accepted as routine. Being recorded by cameras at business locations, in public transportation, in schools, churches, and every other place imaginable. Recent headlines include "Singapore Airlines having cameras built into the seat back of personal entertainment systems," and "Arizona considering a bill to force some public workers to give up DNA samples (and even pay for it)." It seems to be a daily occurrence where we have crossed another line in how far we will go to accept massive surveillance as normal. Do we even have a line in the sand that we would defend? Do we even see anything wrong with it? Absolute power corrupts absolutely and we continue to give knowledge of our personal lives (power) to others. If we continue down the same path, I suppose we deserve what we get? I want to shout "Stop the train, I want off!" but I fear my plea would be ignored. So who out there is more optimistic than I and can recommend some reading that will give me hope? Bill 1475 was introduced by Republican State Senator David Livingston and would require teachers, police officers, child day care workers, and many others to submit their DNA samples along with fingerprints to be stored in a database maintained by the Department of Public Safety. "While the database would be prohibited from storing criminal or medical records alongside the DNA samples, it would require the samples be accompanied by the person's name, Social Security number, date of birth and last known address," reports Gizmodo. "The living will be required to pay [a $250 processing fee] for this invasion of their privacy, but any dead body that comes through a county medical examiner's office would also be fair game to be entered into the database."

Read more of this story at Slashdot.

Recent additions: hackport 0.6

Added by solpeth, Wed Feb 20 09:59:03 UTC 2019.

Hackage and Portage integration tool

MetaFilter: Grand Canyon tourists exposed for years to radiation in museum building

For nearly two decades at the Grand Canyon, tourists, employees, and children on tours passed by three paint buckets stored in the National Park's museum collection building, unaware that they were being exposed to radiation. Although federal officials learned last year that the 5-gallon containers were brimming with uranium ore, then removed the radioactive specimens, the park's safety director alleges nothing was done to warn park workers or the public that they might have been exposed to unsafe levels of radiation.

Open Culture: The History of Ancient Rome in 20 Quick Minutes: A Primer Narrated by Brian Cox

Two thousand years ago, Rome was half the world. A thousand years before that, it was “a tiny tribal settlement of the Latins by the river Tiber.” So, what happened? An awful lot. But narrator Brian Cox makes the history and longevity of Ancient Rome seem simple in 20 minutes in the Arzamas video above, which displays the same talent for narrative compression we saw in an earlier video we featured of Cox describing the history of Russian Art.

This is a far more sprawling subject, but it’s one you can absorb in 20 minutes, if you’re satisfied with very broad outlines. Or, like one YouTube commenter, you can spend six hours, or more, pausing for reading and research after each morsel of information Cox tosses out. The story begins with trade—cultural and economic—between the Latins and the Etruscans to the north and Greeks to the south. Rome grows by adding populations from all over the world, allowing migrants and refugees to become citizens.

Indeed, the great Roman epic, the Aeneid, relates its founding by refugees from Troy. From these beginnings come monumental innovations in building and engineering, as well as an alphabet that spread around the world and a language that spawned dozens of others. The Roman numeral system, an unwieldy way to do mathematics, nonetheless gave to the world the stateliest means of writing numbers. Rome gets the credit for these gifts to world civilization, but they originated with the Etruscans, along with famed Roman military discipline and style of government.

After Tarquin, the last Roman king, committed one abuse too many, the Republic began to form, as did new class divides. Plebs fought Patricians for expanded rights; Senatus Populusque Romanus (SPQR)—the senate and the people of Rome—expressed an ideal of unity and political equality, of a sort. An age of imperial war ensues; conquered peoples are ostensibly made allies, not colonials, though they are also made slaves and supply the legions with “a never ending supply of recruits.”

These are sketches of major campaigns you may remember from your World Civ class: the Punic Wars with Carthage and its commander Hannibal, conducted under the motto of Cato, the senator who beat the drums of war by repeating Carthago delenda est—Carthage must be destroyed; the conquering of Corinth; and the absorption of Alexander's Hellenistic empire into Rome.

The story of the Empire resembles that of so many others: tales of hubris, ferocious brutality, genocide, and endless building. But it is also a story of political genius, in which, gradually, those peoples brought under the banners of Rome by force were given citizenship and rights, ensuring their loyalty. Relative peace—within the borders of Rome, at least—could not hold, and the Republic imploded in civil wars and the ruination of a slave economy and extreme inequality.

The wealthy gobbled up arable land. The tribunes of the people, the Gracchi brothers, suggested a redistribution scheme. The senators responded with force, killing thousands. Two mass-murdering conquering generals, Pompey and Julius Caesar, fought over Rome. Caesar crossed the Rubicon with his legions to take the city, assuming the title Imperator, a move that cost him his life.

But his murder didn’t stop the march of Empire. Under his great-nephew and adopted heir Augustus, a dictator who called himself a senator, Rome spread, flourished, and established a 200-year Pax Romana, a time of thriving arts and culture, popular entertainments, and a well-fed populace.

Augustus had learned from the Gracchi what neither the venal senatorial class nor so many subsequent emperors could. In order to rule effectively, you’ve got to have the people on your side, or have them so distracted, at least, by bread and circuses, that they won't bother to revolt. Watch the full video to learn about the next few hundred years, and learn more about Ancient Rome at the links below.

Related Content:

Play Caesar: Travel Ancient Rome with Stanford’s Interactive Map

Rome Reborn: Take a Virtual Tour of Ancient Rome, Circa 320 C.E.

An Interactive Map Shows Just How Many Roads Actually Lead to Rome

The Ups & Downs of Ancient Rome’s Economy–All 1,900 Years of It–Get Documented by Pollution Traces Found in Greenland’s Ice

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

MetaFilter: "following my path" would be a better translation

Parno Graszt - Járom az utam / ... In My World (2014) will cheer you up in the morning [SLYT]

Parno Graszt is a Hungarian/Romani folk band, playing "traditional gypsy music".

MetaFilter: Karl Lagerfeld, 1935-2019

Karl Lagerfeld, fashion designer who oversaw the transformation of Chanel into an intercontinental superbrand, has died aged 85.

Obituaries from:
Vogue
The Guardian
The New York Times

Disquiet: The “Electricity” of Colorist

A new Colorist album, Full Circle, was released late last year, and I somehow only just figured this out. I was trying today to recall the name of its phenomenal saxophonist, Charles Gorczynski, whose work I first came to appreciate when he was playing with the group Spinach Prince almost a decade back. (I’m horrible with names, but I’m very good with faces, and I’m even better with short video clips of people inventively playing saxophone and a large Monome grid in a live setting.)

As luck would have it (or not, in this case), I missed a different Gorczynski group perform by just a few days — but until the next show, there is Full Circle to enjoy. Colorist is the trio of Gorczynski working with Charles Rumback on drums and John Hughes (of Hefty Records) on “synthesizers / electronics.” This is their fourth album together. The threesome’s work draws from jazz and free improvisation, which they wield in the context of atmospheric electronic music. One highlight of the album is the sprawling-seeming yet fairly compact (under six minutes) “Electricity,” all tunnel ambience, swirling drones, emergent rhythms, and deep simpatico ensemble playing.

The album is available at serein.co.uk.

Slashdot: Neuroscientists Say They've Found An Entirely New Form of Neural Communication

Scientists think they've identified a previously unknown form of neural communication that self-propagates across brain tissue, and can leap wirelessly from neurons in one section of brain tissue to another -- even if they've been surgically severed. The discovery offers some radical new insights about the way neurons might be talking to one another, via a mysterious process unrelated to conventionally understood mechanisms, such as synaptic transmission, axonal transport, and gap junction connections. ScienceAlert reports: "We don't know yet the 'So what?' part of this discovery entirely," says neural and biomedical engineer Dominique Durand from Case Western Reserve University. "But we do know that this seems to be an entirely new form of communication in the brain, so we are very excited about this." To that end, Durand and his team investigated slow periodic activity in vitro, studying the brain waves in slices extracted from decapitated mice. What they found was that slow periodic activity can generate electric fields which in turn activate neighboring cells, constituting a form of neural communication without chemical synaptic transmission or gap junctions. This neural activity can actually be modulated - strengthened or blocked - by applying weak electrical fields and could be an analogue form of another cell communication method, called ephaptic coupling. The team's most radical finding was that these electrical fields can activate neurons through a complete gap in severed brain tissue, when the two pieces remain in close physical proximity. "To ensure that the slice was completely cut, the two pieces of tissue were separated and then rejoined while a clear gap was observed under the surgical microscope," the authors explain in their paper. "The slow hippocampal periodic activity could indeed generate an event on the other side of a complete cut through the whole slice." The findings are reported in The Journal of Physiology.

Read more of this story at Slashdot.

MetaFilter: Jasmin Paris wins the Spine Ultra Marathon in 83h 12m

Jasmin Paris smashed the old record by more than 12 hours. The Spine is a 268-mile race along the Pennine Way, from Edale in the Peak District to Kirk Yetholm in Scotland. She slept for only 3 hours, and at each checkpoint she pumped milk for her daughter.

Via Alice Fraser's Tea with Alice Podcast.

MetaFilter: An Epitaph For A Place That Lives

peach.cool is an iPhone-only microblogging service that was founded in 2016. (Hackernoon, The Verge, The New York Times)
It is small and probably won't ever be big.
At The Verge, Bijan Stephen has a short note about the community's stress when the service went down for a few days.

Slashdot: FDA Warns Against Using Young Blood As Medical Treatment

An anonymous reader quotes a report from CNN: The U.S. Food and Drug Administration warned Tuesday against using plasma infusions from young blood donors to ward off the effects of normal aging as well as other more serious conditions. Plasma, the liquid portion of the blood, contains proteins that help clot blood. The infusions are promoted to treat a variety of conditions, including normal aging and memory loss as well as serious conditions such as dementia, multiple sclerosis, heart disease and post-traumatic stress disorder. "There is no proven clinical benefit of infusion of plasma from young donors to cure, mitigate, treat, or prevent these conditions, and there are risks associated with the use of any plasma product," FDA Commissioner Dr. Scott Gottlieb wrote in a statement Tuesday. "The reported uses of these products should not be assumed to be safe or effective," he added, noting that the FDA "strongly" discourages consumers from using this therapy "outside of clinical trials under appropriate institutional review board and regulatory oversight." Gottlieb said that "a growing number of clinics" are offering plasma from young donors and similar therapies, though he did not name any in particular.

Read more of this story at Slashdot.

Penny Arcade: News Post: Level99 Games In the Studio

Tycho: We have sponsored streams today, a couple of them more or less back to back, one for a game I own but have been too intimidated to learn and one for a game I have taught many people to play.  It’s all happening on our Twitch thing.  Thanks for stopping by, Level99! The first is Millennium Blades, a board game about collectible card games(?).  In the fiction, a CCG - also called Millennium Blades - has been in print for over a thousand years.  The game starts on pre-release night, and as the game progresses you collect cards, trade cards, enter tournaments, and try to…

ScreenAnarchy: CHOKEHOLD: Trailer And Tour Dates For Indie Action Flick Starring Casper Van Dien And Melissa Croden

Our friends at Ammo Entertainment have been touring Brian Skiba's indie action flick Chokehold this month. The tour will end in New York City on the 28th and, though we are a bit behind on the schedule, if you are a fan of MMA fighter crossovers into film then you may want to check out Chokehold and its star, MMA fighter Melissa Croden. Also of interest to any MLB fans out there: one of the film's executive producers is six-time all-star Kenny Lofton, and he has been touring with the film. The final two stops of the tour are in Chicago on the 24th and 25th, then NYC on the 27th and 28th.  Chokehold will be available on VOD, TVOD and DVD on March 5th....

[Read the whole post on screenanarchy.com...]

Penny Arcade: News Post: Computer Stuff!

Gabe: The last time I really took PC gaming seriously was honestly around the time the comic strip started. So, about twenty years ago I guess. I can still remember keeping up to date on the latest improvements in Voodoo graphics cards. As time went by I played on consoles more and more until I had fallen off the PC entirely. I usually had a laptop or something that could run a game or two but I lost track of where the technology went. About a year ago I decided I wanted to give PC gaming another try and so I bought a computer for the sole purpose of playing games. We even made a comic about it at…

Daniel Lemire's blog: More fun with fast remainders when the divisor is a constant

In software, compilers can often optimize away integer divisions, and replace them with cheaper instructions, especially when the divisor is a constant. I recently wrote about some work on faster remainders when the divisor is a constant. I reported that it can be fruitful to compute the remainder directly, instead of first computing the quotient (as compilers are doing when the divisor is a constant).

To get good results, we can use an important insight that is not documented anywhere at any length: we can use 64-bit processor instructions to do 32-bit arithmetic. This is fair game and compilers could use this insight, but they do not do it systematically. Using this trick alone is enough to get substantial gains in some instances, if the algorithmic issues are just right.

So it is a bit complicated. Using 64-bit processor instructions for 32-bit arithmetic is sometimes useful. In addition, computing the remainder directly without first computing the quotient is sometimes useful. Let us collect a data point for fun and to motivate further work.

First let us consider how you might compute the remainder by leaving it up to the compiler to do the heavy lifting (D is a constant known to the compiler). I expect that the compiler will turn this code into a sequence of instructions over 32-bit registers:

#include <stdint.h>
#define D 22 /* compile-time divisor; 22 is the value used in the benchmark below */

uint32_t compilermod32(uint32_t a) {
  return a % D;
}

Then we can compute the remainder directly, using some magical mathematics and 64-bit instructions (the low 64 bits of M * a hold the fractional part of a/D scaled by 2^64; multiplying by D and keeping the top 64 bits converts that fraction back into the remainder):

#define M ((uint64_t)(UINT64_C(0xFFFFFFFFFFFFFFFF) / (D) + 1))

uint32_t directmod64(uint32_t a) {
  uint64_t lowbits = M * a;
  return ((__uint128_t)lowbits * D) >> 64;
}
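
A quick sanity check can confirm that the direct method agrees with the compiler's % operator over a range of inputs; this is a minimal sketch of my own, not part of the benchmark code:

#include <assert.h>
#include <stdio.h>

int main(void) {
  /* brute-force comparison of directmod64 against the plain % operator */
  for (uint64_t a = 0; a < 10000000; a++) {
    assert(directmod64((uint32_t)a) == compilermod32((uint32_t)a));
  }
  printf("directmod64 matches %% %d on the tested range\n", D);
  return 0;
}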

Finally, you can compute the remainder “indirectly” (by first computing the quotient) but using 64-bit processor instructions.

uint32_t indirectmod64(uint32_t a) {
  uint64_t quotient = ( (__uint128_t) M * a ) >> 64;
  return a - quotient * D;
}

As a benchmark, I am going to compute a linear congruential generator (basically a recursive linear function with a remainder thrown in), using these three approaches, plus the naive one. I use the constant 22 as the divisor, a Skylake processor and the GNU GCC 8.1 compiler. For each generated number I measure the following number of CPU cycles (on average):

slow (division instruction): 29 cycles
compiler (32-bit):           12 cycles
direct (64-bit):             10 cycles
indirect (64-bit):           11 cycles
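
For illustration, the timed kernel might look something like the following sketch. This is a reconstruction rather than the actual benchmark (which measures cycle counts with hardware performance counters), and the multiplier and increment constants here are placeholders:

#include <stddef.h>

/* Linear congruential generator with the modular reduction done by the
   direct 64-bit method; swap in compilermod32 or indirectmod64 to time
   the other strategies. */
uint32_t lcg_direct(uint32_t seed, size_t rounds) {
  uint32_t x = seed;
  for (size_t i = 0; i < rounds; i++) {
    x = directmod64(2862933555u * x + 2653589793u);
  }
  return x; /* returning the state keeps the loop from being optimized away */
}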

My source code is available.

Depending on your exact platform, all three approaches (compiler, direct, indirect) could be a contender for best results. In fact, it is even possible that the division instruction could win out in some cases. For example, on ARM and POWER processors, the division instruction does beat some compilers.

Where does this leave us? There is no silver bullet but a simple C function can beat a state-of-the-art optimizing compiler. In many cases, we found that a direct computation of the 32-bit remainder using 64-bit instructions was best.

ScreenAnarchy: PUNTO MUERTO (DEAD END): Stills And Teaser Revealed For 40s Style Murder Mystery Film From Argentina

The first teaser and a selection of images were passed along to us today for Daniel de la Vega’s new film Punto Muerto (Dead End).    Luis Peñafiel is a writer who has just finished a novel that stages the perfect crime in a locked room. He has earned the respect and admiration of his fellow writers for the surprising and dramatic resolution of his story. That same night, a writer is killed, following the same pattern as in his book. Peñafiel is accused of the crime, and to prove his innocence he must find the real killer. In his effort to clear his name, Peñafiel will discover that nothing is more confusing than an obvious fact...   Daniel de la...

[Read the whole post on screenanarchy.com...]

Quiet Earth: New on Blu-ray and DVD for February 19, 2019!

Overlord comes from J.J. Abrams' Bad Robot and is truly badass. With only hours until D-Day, a team of American paratroopers drop into Nazi-occupied France to carry out a mission that’s crucial to the invasion's success.

Tasked with destroying a radio transmitter atop a fortified church, the desperate soldiers join forces with a young French villager to penetrate the walls and take down the tower. But, in a mysterious Nazi lab beneath the church, the outnumbered G.I.s come face-to-face with enemies unlike any the world has ever seen.

From producer J.J. Abrams, Overlord is a [Continued ...]

Colossal: Shapely Shadows Reimagined as Quirky Illustrations by Vincent Bal

The inspiration for the illustrated works of Belgian filmmaker and illustrator Vincent Bal (previously) comes from the shadows cast by everyday objects and detritus from the world around him. Bits of trash and spare items from his home are reimagined as curvy outlines for a cast of characters that range from a young girl in a rainstorm to a DJ in his flow. Other items, like a textured glass, create the perfect sun-spotted water for a backyard pool. Bal is currently in production on a live-action film, called Shadowology, that incorporates his shadow drawings. You can support the creation of the film on Cinecrowd, and see more of his animations on Instagram. Bal also offers prints of his illustrations on Etsy.

 

Open Culture: Gustave Doré’s Haunting Illustrations of Dante’s Divine Comedy

Inferno, Canto X:

Many artists have attempted to illustrate Dante Alighieri's epic poem the Divine Comedy, but none have made such an indelible stamp on our collective imagination as the Frenchman Gustave Doré.

Doré was 23 years old in 1855, when he first decided to create a series of engravings for a deluxe edition of Dante's classic.  He was already the highest-paid illustrator in France, with popular editions of Rabelais and Balzac under his belt, but Doré was unable to convince his publisher, Louis Hachette, to finance such an ambitious and expensive project. The young artist decided to pay the publishing costs for the first book himself. When the illustrated Inferno came out in 1861, it sold out fast. Hachette summoned Doré back to his office with a telegram: "Success! Come quickly! I am an ass!"

Hachette published Purgatorio and Paradiso as a single volume in 1868. Since then, Doré's Divine Comedy has appeared in hundreds of editions. Although he went on to illustrate a great many other literary works, from the Bible to Edgar Allan Poe's "The Raven," Doré is perhaps best remembered for his depictions of Dante. At The World of Dante, art historian Aida Audeh writes:

Characterized by an eclectic mix of Michelangelesque nudes, northern traditions of sublime landscape, and elements of popular culture, Doré's Dante illustrations were considered among his crowning achievements -- a perfect match of the artist's skill and the poet's vivid visual imagination. As one critic wrote in 1861 upon publication of the illustrated Inferno: "we are inclined to believe that the conception and the interpretation come from the same source, that Dante and Gustave Doré are communicating by occult and solemn conversations the secret of this Hell plowed by their souls, traveled, explored by them in every sense."

The scene above is from Canto X of the Inferno. Dante and his guide, Virgil, are passing through the Sixth Circle of Hell, in a place reserved for the souls of heretics, when they look down and see the imposing figure of Farinata degli Uberti, a Tuscan nobleman who had agreed with Epicurus that the soul dies with the body, rising up from an open grave. In the translation by John Ciardi, Dante writes:

My eyes were fixed on him already. Erect,
he rose above the flame, great chest, great brow;
he seemed to hold all Hell in disrespect

Inferno, Canto XVI:

As Dante and Virgil prepare to leave Circle Seven, they are met by the fearsome figure of Geryon, Monster of Fraud. Virgil arranges for Geryon to fly them down to Circle Eight. He climbs onto the monster's back and instructs Dante to do the same.

Then he called out: "Now, Geryon, we are ready:
bear well in mind that his is living weight
and make your circles wide and your flight steady."

As a small ship slides from a beaching or its pier,
backward, backward -- so that monster slipped
back from the rim. And when he had drawn clear

he swung about, and stretching out his tail
he worked it like an eel, and with his paws
he gathered in the air, while I turned pale.

Inferno, Canto XXXIV:

In the Ninth Circle of Hell, at the very center of the Earth, Dante and Virgil encounter the gigantic figure of Satan. As Ciardi writes in his commentary:

He is fixed into the ice at the center to which flow all the rivers of guilt; and as he beats his great wings as if to escape, their icy wind only freezes him more surely into the polluted ice. In a grotesque parody of the Trinity, he has three faces, each a different color, and in each mouth he clamps a sinner whom he rips eternally with his teeth. Judas Iscariot is in the central mouth: Brutus and Cassius in the mouths on either side.

 Purgatorio, Canto II:

At dawn on Easter Sunday, Dante and Virgil have just emerged from Hell when they witness The Angel Boatman speeding a new group of souls to the shore of Purgatory.

Then as that bird of heaven closed the distance
between us, he grew brighter and yet brighter
until I could no longer bear the radiance,

and bowed my head. He steered straight for the shore,
his ship so light and swift it drew no water;
it did not seem to sail so much as soar.

Astern stood the great pilot of the Lord,
so fair his blessedness seemed written on him;
and more than a hundred souls were seated forward,

singing as if they raised a single voice
in exitu Israel de Aegypto.
Verse after verse they made the air rejoice.

The angel made the sign of the cross, and they
cast themselves, at his signal, to the shore.
Then, swiftly as he had come, he went away.

 Purgatorio, Canto IV:

The poets begin their laborious climb up the Mount of Purgatory. Partway up the steep path, Dante cries out to Virgil that he needs to rest.

The climb had sapped my last strength when I cried:
"Sweet Father, turn to me: unless you pause
I shall be left here on the mountainside!"

He pointed to a ledge a little ahead
that wound around the whole face of the slope.
"Pull yourself that much higher, my son," he said.

His words so spurred me that I forced myself
to push on after him on hands and knees
until at last my feet were on that shelf.

Purgatorio, Canto XXXI:

Having ascended at last to the Garden of Eden, Dante is immersed in the waters of the Lethe, the river of forgetfulness, and helped across by the maiden Matilda. He drinks from the water, which wipes away all memory of sin.

She had drawn me into the stream up to my throat,
and pulling me behind her, she sped on
over the water, light as any boat.

Nearing the sacred bank, I heard her say
in tones so sweet I cannot call them back,
much less describe them here: "Asperges me."

Then the sweet lady took my head between
her open arms, and embracing me, she dipped me
and made me drink the waters that make clean.

Paradiso, Canto V:

In the Second Heaven, the Sphere of Mercury, Dante sees a multitude of glowing souls. In the translation by Allen Mandelbaum, he writes:

As in a fish pool that is calm and clear,
the fish draw close to anything that nears
from outside, it seems to be their fare,
such were the far more than a thousand splendors
I saw approaching us, and each declared:
"Here now is one who will increase our loves."
And even as each shade approached, one saw,
because of the bright radiance it set forth,
the joyousness with which that shade was filled.

Paradiso, Canto XXVIII:

Upon reaching the Ninth Heaven, the Primum Mobile, Dante and his guide Beatrice look upon the sparkling circles of the heavenly host. (The Christian Beatrice, who personifies Divine Love, took over for the pagan Virgil, who personifies Reason, as Dante's guide when he reached the summit of Purgatory.)

And when I turned and my own eyes were met
By what appears within that sphere whenever
one looks intently at its revolution,
I saw a point that sent forth so acute
a light, that anyone who faced the force
with which it blazed would have to shut his eyes,
and any star that, seen from the earth, would seem
to be the smallest, set beside that point,
as star conjoined with star, would seem a moon.
Around that point a ring of fire wheeled,
a ring perhaps as far from that point as
a halo from the star that colors it
when mist that forms the halo is most thick.
It wheeled so quickly that it would outstrip
the motion that most swiftly girds the world.

Paradiso, Canto XXXI:

In the Empyrean, the highest heaven, Dante is shown the dwelling place of God. It appears in the form of an enormous rose, the petals of which house the souls of the faithful. Around the center, angels fly like bees carrying the nectar of divine love.

So, in the shape of that white Rose, the holy
legion has shown to me -- the host that Christ,
with His own blood, had taken as His bride.
The other host, which, flying, sees and sings
the glory of the One who draws its love,
and that goodness which granted it such glory,
just like a swarm of bees that, at one moment,
enters the flowers and, at another, turns
back to that labor which yields such sweet savor,
descended into that vast flower graced
with many petals, then again rose up
to the eternal dwelling of its love.

You can access a free edition of The Divine Comedy featuring Doré's illustrations at Project Gutenberg. A Yale course on reading Dante in translation appears in the Literature section of our collection of 750 Free Online Courses.

Follow Open Culture on Facebook and Twitter and share intelligent media with your friends. Or better yet, sign up for our daily email and get a daily dose of Open Culture in your inbox. 

If you'd like to support Open Culture and our mission, please consider making a donation to our site. It's hard to rely 100% on ads, and your contributions will help us provide the best free cultural and educational materials.

Note: An earlier version of this post appeared on our site in October 2013.

Related Content:

An Illustrated and Interactive Dante’s Inferno: Explore a New Digital Companion to the Great 14th-Century Epic Poem

Visualizing Dante’s Hell: See Maps & Drawings of Dante’s Inferno from the Renaissance Through Today

Artists Illustrate Dante’s Divine Comedy Through the Ages: Doré, Blake, Botticelli, Mœbius & More

A Digital Archive of the Earliest Illustrated Editions of Dante’s Divine Comedy (1487-1568)

Alberto Martini’s Haunting Illustrations of Dante’s Divine Comedy (1901-1944)

Gustave Doré’s Haunting Illustrations of Dante’s Divine Comedy is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

ScreenAnarchy: Netflix Creates Production Hub in Toronto, Renting Space, Creating Jobs

The other week the president of Canada's national broadcaster went off script during a panel discussion to comment on Netflix's growing influence in the true north, strong and free. They went as far as comparing that influence to the colonialism of past empires. It was pretty awesome (sarcasm).    Well, if you happen to be down on Front St today and you see them running between the cars, screaming, 'The End is Nucking Figh', here is why.    Netflix has announced the creation of a production hub right here in Toronto, just down the road from our national broadcaster, leasing space from Pinewood Studios and Cinespace. That will be a total of nearly 250,000 square feet and provide jobs for up to 1,850 Canadians per year...

[Read the whole post on screenanarchy.com...]

ScreenAnarchy: "13: Fanboy" - Some fans love you to death - Seeking investors!

"13: Fanboy" is quite a feat in horror, directed by Deborah Voorhees ("Friday the 13th: A New Beginning") while written by Joel Paul Reisig and Deborah Vorhees, boasts quite a few famous horror names among the cast: Corey Feldman, Kane Hodder, Judie Aronson, Thom Mathews, C.J. Graham, Jennifer Banko, Tracie Savage, Timothy Skyler Dunigan, Vincente DiSanti, Ron Sloan, Drew Leighty, Hayley Greenbauer, Donald Schell, Rachael Christenson and Brad M. Robinson - and maybe you! Yes, maybe you too can get involved? How you ask, well, let's leave it to the producers to explain: We have now gone INDEMAND so people who couldn't contribute during the initial run can now help support the upcoming production of the film by securing some exclusive perks only available in...

[Read the whole post on screenanarchy.com...]

ScreenAnarchy: Interview: ANTIQUITIES director DANIEL CAMPBELL on the making of the film

“Antiquities” is a charmingly honest family drama, the kind of simple, sincere movie that ends up winning over its audience, despite all its evident flaws. It’s honest enough, funny enough, and believable enough, for one to enjoy it quite thoroughly. I had the opportunity to exchange e-mails with the film’s director, Daniel Campbell. What follows is a brief interview in which he tells me a little about the pre-production process of the movie, as well as the personal experiences that inspired this particular story.   - What attracted you to this particular story, in order to make your first feature film?   With "Antiquities" being so personal to me, it's no coincidence that it was my first short and first feature. When I made the...

[Read the whole post on screenanarchy.com...]

Colossal: Dozens of Expressive Puppets Encourage Kindness and Acceptance in a Series of Sing-A-Long Short Films

Irish director and animator Johnny Kelly (previously) is known for his puppet-based films, most notably his 2011 piece for Chipotle titled Back to the Start. His most recent project, Right on Tracks, is a series of short sing-a-long videos for Cheerios. Kelly worked with the art collective Nous Vous and Andy Gent, who was also the lead of the puppets department for Isle of Dogs.

The catchy anthems have an inclusive message that focuses on building confidence in yourself while practicing kindness to all. Walter Martin of The Walkmen created songs such as Just Be You, which teaches acceptance of your own quirks and unique traits, and It’s All Family, which looks at familial structures in a much broader light than we typically see on TV.

“We wanted to show diversity,” Kelly told It’s Nice That. “Nous Vous’ characters are so otherworldly and abstract that they could be anyone and everyone. It was important that people empathize with them too. With such simple designs, you can read a little more into their expressions, project your own loneliness onto a lonely character, or warmth onto a happy character.”

The puppets are large, small, and every size in between, with characteristics that range from colorful tufts of hair to necks that extend out like tree branches. You can take a behind-the-scenes peek at how Kelly created the four-part series in the video below, and view more of his short films on his website and Vimeo. (via The Kid Should See This, It’s Nice That)

Quiet Earth: Did You Love ALITA: BATTLE ANGEL? Watch the 90's Anime Here

Once again the critics prove they're out of touch with huge swaths of the fandom. Anime and Manga fans are absolutely raving about Robert Rodriguez and James Cameron's adaptation of Yukito Kishiro's original Battle Angel Alita Manga series.

While the film's Rotten Tomatoes score has a 59% by critics, audiences have it pegged at a massive 93% in contrast. That's an interesting divide.

Personally, I was a HUGE fan of the film as an example of a Cyberpunk movie that manages to be both a mainstream crowd pleaser AND thoughtful. It's a straightforward origin story with universal overtones, but I don't knock the film for leaning into its simple story. There's so much else going on in the world of the film that the simple story allows you to appreciate it all the more IMO. [Continued ...]

The Geomblog: OpenAI, AI threats, and norm-building for responsible (data) science

All of twitter is .... atwitter?... over the OpenAI announcement and partial non-release of code/documentation for a language model that purports to generate realistic-sounding text from simple prompts. The system actually addresses many NLP tasks, but the one that's drawing the most attention is the deepfakes-like generation of plausible news copy (here's one sample).

Most consternation is over the rapid PR buzz around the announcement, including somewhat breathless headlines (that OpenAI is not responsible for) like

OpenAI built a text generator so good, it’s considered too dangerous to release
or
Researchers, scared by their own work, hold back “deepfakes for text” AI
There are concerns that OpenAI is overhyping solid but incremental work, that they're disingenuously allowing for overhyped coverage in the way they released the information, or, worse, that they're deliberately controlling hype as a publicity stunt.

I have nothing useful to add to the discussion above: indeed, see posts by Anima Anandkumar, Rob Munro, Zachary Lipton and Ryan Lowe for a comprehensive discussion of the issues relating to OpenAI. Jack Clark from OpenAI has been engaging in a lot of twitter discussion on this as well.

But what I do want to talk about is the larger issues around responsible science that this kerfuffle brings up, with a caveat, as Margaret Mitchell puts it in this searing thread.

To understand the kind of "norm-building" that needs to happen here, let's look at two related domains.

In computer security, there's a fairly well-established model for finding weaknesses in systems. An exploit is discovered, the vulnerable entity is given a chance to fix it, and then the exploit is revealed, often simultaneously with patches that rectify it. Sometimes the vulnerability isn't easily fixed (see Meltdown and Spectre). But it's still announced.

A defining characteristic of security exploits is that they are targeted, specific and usually suggest a direct patch. The harms might be theoretical, but are still considered with as much seriousness as the exploit warrants.

Let's switch to a different domain: biology. Starting from the sequencing of the human genome through the million-person precision medicine project to CRISPR and cloning babies, genetic manipulation has provided both invaluable technology for curing disease as well as grave ethical concerns about misuse of the technology. And professional organizations as well as the NIH have (sometimes slowly) risen to the challenge of articulating norms around the use and misuse of such technology.

Here, the harms are often more diffuse, and the harms are harder to separate from the benefits. But the harm articulation is often focused on the individual patient, especially given the shadow of abuse that darkens the history of medicine.

The harms with various forms of AI/ML technology are myriad and diffuse. They can cause structural damage to society - in the concerns over bias, the ways in which automation affects labor, the way in which fake news can erode trust and a common frame of truth, and so many others - and they can cause direct harm to individuals. And the scale at which these harms can happen is immense.

So where are the professional groups, the experts in thinking about the risks of democratization of ML, and all the folks concerned about the harms associated with AI tech? Why don't we have the equivalent of the Asilomar conference on recombinant DNA?

I appreciate that OpenAI has at least raised the issue of thinking through the ethical ramifications of releasing technology. But as the furore over their decision has shown, no single imperfect actor can really claim to be setting the guidelines for ethical technology release, and "starting the conversation" doesn't count when (again as Margaret Mitchell points out) these kinds of discussions have been going on in different settings for many years already.

Ryan Lowe suggests workshops at major machine learning conferences. That's not a bad idea. But it will attract the people who go to machine learning conferences. It won't bring in the journalists, the people getting SWAT'd (and in one case killed) by fake news, the women being harassed by trolls online with deep-fake porn images.

News is driven by news cycles. Maybe OpenAI's announcement will lead to us thinking more about issues of responsible data science. But let's not pretend these are new, or haven't been studied for a long time, or need to have a discussion "started".


Open Culture: Watch Bauhaus World, a Free Documentary That Celebrates the 100th Anniversary of Germany’s Legendary Art, Architecture & Design School

This April 1st marks the 100th anniversary of the founding of the Bauhaus, the German art school that, though short-lived, launched an entire design movement with a stark, functional aesthetic all its own. It can be tempting, looking into that aesthetic that finds the beauty in industry and the industry in beauty, to regard it as purely a product of its time and place, specifically a 20th-century Europe between the wars searching for ways to invent the future. But as revealed in Bauhaus World, this three-part documentary from German broadcaster Deutsche Welle, the legacy of the Bauhaus lives on not just in the reputations of its best known original members — Walter Gropius, Paul Klee, László Moholy-Nagy, and Josef Albers, among others — but in the currently active creators it continues to inspire in every corner of the Earth.

"What do escalators in Medellín, Arabic lettering in Amman, story-telling furniture from London, urban farming in Detroit and a co-living complex in Tokyo have to do with the Bauhaus?" asks Deutsche Welle's web site. They all draw from "the influence that the philosophy of the Bauhaus movement still exerts on the globalized society of the 21st century," a time that has its societal parallels with the year 1919.

To illustrate those parallels as well as the continuing relevance of Bauhaus teachings, "we meet architects, urban planners, designers and artists from around the globe who, in the spirit of the Bauhaus, want to rethink and change the world." True to its title, Bauhaus World's journey involves a wide variety of countries, and not just European ones: different segments profile the work of Bauhaus-influenced designers everywhere from Mexico to Jordan, Colombia to Israel, the United States to Japan.

It's in Japan, in fact, that the first part of Bauhaus World, "The Code," finds the outer reaches of the spread of Bauhaus that began with the exile of its members from Nazi Germany. The second part, "The Effect," deals with the enduring influence that has turned Bauhaus and its principles from a movement to a brand, one that has potentially done more than its share to make us as design-obsessed as we've become in the 21st century — a century that, the third and final part "The Utopia" considers, may or may not have a place for the original Bauhaus ideals. But whatever Gropius, Klee, Moholy-Nagy, Albers, and the rest would think of what the Bauhaus they created has become over the past hundred years, over the next hundred years more and more designers — emerging from a wider and wider variety of societies and traditions — will come to see themselves as its descendants.

Bauhaus World will be added to our list of Free Documentaries, a subset of our collection, 1,150 Free Movies Online: Great Classics, Indies, Noir, Westerns, etc.

Related Content:

How the Radical Buildings of the Bauhaus Revolutionized Architecture: A Short Introduction

An Oral History of the Bauhaus: Hear Rare Interviews (in English) with Walter Gropius, Ludwig Mies van der Rohe & More

32,000+ Bauhaus Art Objects Made Available Online by Harvard Museum Website

The Female Pioneers of the Bauhaus Art Movement: Discover Gertrud Arndt, Marianne Brandt, Anni Albers & Other Forgotten Innovators

Download Beautifully-Designed Bauhaus Books & Journals for Free: Gropius, Klee, Kandinsky, Moholy-Nagy & More

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Watch Bauhaus World, a Free Documentary That Celebrates the 100th Anniversary of Germany’s Legendary Art, Architecture & Design School is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: “Myth and Reality” by Photographer Jeremy Jude Lee

Open Culture: How Obsessive Artists Colorize Old Photographs & Restore the True Colors of the Past

The art of hand-coloring or tinting black and white photographs has been around, the Vox video above explains, since the earliest days of photography itself. “But these didn’t end up looking super realistic,” at least not next to their modern counterparts, created with computers. Digital colorization “has made it possible for artists to reconstruct images with far more accuracy.”

Accuracy, you say? How is it possible to reconstruct color arrangements from the past when they have only been preserved in black and white? Well, this requires research. “You now have a wealth of information,” says Jordan Lloyd, a master digital colorist. “It’s just knowing where to look.”

Historical advertisements, diaries, documents, and the assessments of historians and ethnographers, among other resources, provide enough data for a realistic approximation. Some conjecture is involved, but when you see the amount of research that goes into determining the colors of the past, you will most surely be impressed.

This isn’t playing with filters and settings in Photoshop until the images look good—it’s using software to recreate what scholarship uncovers, the kind of digging that turns up important historical facts such as the original red-on-black logo of 7Up, or the fact that the Eiffel tower was painted a color called “Venetian red” during its construction.

Unless we know this color history, we might be inclined to think colorized photographs that get it right are wrong. However, the aim of modern colorizers is not only to make the past seem more immediate to us in the present; they also attempt to restore the colors people saw when photographs from the 19th and early 20th centuries were taken.

The software may not dictate color, but it still plays an indispensable role in how alive digitally colorized photographs appear. Colorizers first use it to remove blemishes, scratches, and the signs of age. Then they blend hundreds of layers of colors. It’s a little like making a digital oil painting. Human skin can have up to 20 layers of colors, ranging from pinks, to yellows, to blues.

Without “an intuitive understanding of how light works in the atmosphere,” however, these artists would fail to persuade us. Color is produced by light, as we know, and light is conditioned by levels of artificial and natural light blending in a space, by atmospheric conditions and time of day. Different surfaces reflect light differently. Correctly interpreting these conditions in a monochromatic photograph is the key to “achieving photorealism.”

Critics of colorization treat it like a form of vandalism, but as Lloyd points out, the process is not meant to substitute for the original artifacts, but to supplement them. The colorized photos we see in the video and at the links below are of images in the public domain, available to use and reuse for any purpose. Colorization artists have found their purpose in making the past seem far less like a distant country.

Related Content:

Russian History & Literature Come to Life in Wonderfully Colorized Portraits: See Photos of Tolstoy, Chekhov, the Romanovs & More

Colorized Photos Bring Walt Whitman, Charlie Chaplin, Helen Keller & Mark Twain Back to Life

The Opening of King Tut’s Tomb, Shown in Stunning Colorized Photos (1923-5)

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

How Obsessive Artists Colorize Old Photographs & Restore the True Colors of the Past is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

OCaml Weekly News: OCaml Weekly News, 19 Feb 2019

  1. ReasonableCoding: a live-stream showcasing Reason/OCaml
  2. Dune 1.7.0
  3. Previous discussions on namespaces?
  4. Other OCaml News

Tea Masters: 2 Puerh Best Buy

2003 private cake vs 1999 Menghai 7542
I live in Taiwan and have never traveled to Yunnan. So it's probably a surprise to see that I have listed the Puerh category at the very top of all the teas I am selecting on www.tea-masters.com. There are many reasons for this choice. Puerh is the most concentrated and powerful tea in the world. It's one of the rare tea trees that survive on their own in the wild. It's also one of the rare teas that age extremely well. And Taiwan was one of the first places to recognize the importance and quality of this amazing tea (in the late 1980s-early 1990s). At that time, China was still very poor and exported all the best teas it produced. The best places to find aged puerh have long been Hong Kong and Taiwan. That's why my selection focuses mostly on aged puerhs. These teas are amazing, but increasingly expensive. However, in my selection, there are 2 very good bargains I would like to (re)introduce:
2003 private cake vs 1999 Menghai 7542
1. Left: the 2003 small private factory raw puerh cake. 59.95 USD
In 2003, the puerh market wasn't regulated, and the easiest way for new private factories to sell puerh was to use the same wrapping and packaging as in the past. The neifei mentions Menghai Factory, but this isn't actually the case. From the slightly lighter color of the buds, we can see that this cake is slightly younger than the one on the right.

2. Right: the 1999 Menghai 7542 raw puerh cake commissioned by the CNNP. 399 USD
This cake was commissioned by the CNNP and made by Menghai Tea Factory. That's why the neifei states CNNP. (During the communist/monopoly era, 1950-2000, the CNNP owned all the tea factories, but there was no factory named CNNP.) But there is no real doubt that the CNNP asked the cake expert, Menghai Tea Factory, to produce this cake. The quality of the leaves and several other details point in the same direction. It's in part because Menghai isn't written on it that its price is actually very attractive compared to other 7542 cakes on the market. And also because I could purchase it in Taiwan in 2014 from someone who had owned the cakes for a long time.
2003 private cake vs 1999 Menghai 7542
1. The 2003 small private factory raw puerh cake.

This cake is easy to flake. It's also available in a sample size of 25 gr (6.95 USD). The main difference with the 1999 cake is the leaves: we see far fewer buds, and most of these leaves are plantation puerh. This isn't so obvious when you look at the cakes.

The best way to brew this puerh is with a normal to generous amount of leaves, a Yixing zisha teapot (not a zhuni), a slow pour, and short brewing times.
The brew is dark brown, but not black. It has a good transparency and color. The scent is the best part of this puerh: it comes very close to an aged 8582. These darker and heavier notes make it seem older, because this puerh had a more humid storage than the 7542. But it's not a ripe, cooked puerh. For that, the brew isn't dark enough and the wet leaves are not cooked (see the last picture of this tea below).
For a detailed review of this tea, I recommend reading MattCha's blog post. Its woody scents and clean mouthfeel make it a very good introduction to the world of aged puerh and a good choice for an everyday tea.
The brew takes on a lighter hue at the end. And we can see that the leaves are not burned in the manner of cooked puerh. That's why this tea retains some coolness and freshness. In conclusion, this is the cheapest puerh of my selection and one of the Best Buys!

The other Best Buy puerh is the 1999 Menghai 7542 raw puerh cake commissioned by the CNNP.

This tea is also available in sample size so that you can taste it and see how you like it before purchasing a whole cake (29 USD for 20 gr).

The 7542 Menghai Factory cake is the successor of the Luyin, green mark puerh. Many say that the red mark is the best cake of that era, as it predates the green mark and is more expensive. However, according to Teaparker (who has often tasted both), the Luyin/green mark is superior. The green mark uses smaller leaves and more buds than the red mark and this adds finesse to its taste and aftertaste.

Also, thanks to the fact that the 7542 cake was pressed every year, it's possible to forecast its evolution in terms of aging and pricing. See how my early 1990s Luyin costs 1500 USD! So, this 1999 cake has just turned 20 years old and is bound to see its price increase in a couple of months...
I'm using the same teapot, ivory porcelain cups, water and kettle for this tea, so that I can compare them. However, I'm not brewing this puerh exactly the same way. Thanks to its high concentration of aromas in the buds, I'm using fewer leaves and therefore I brew them a little bit longer.
The brew has a rich brown color and a complete lack of turbidity. It shines. And the result is delicious, because it combines the fresh energy of raw puerh, the woody and incense scents of aged puerh, and the smooth refinement of aging. It coats the whole mouth in a clean and luscious way and it resonates for a long, long time. This takes the aged puerh to another level, but it's interesting how close both scents come. It's just that the youth and energy of this 20-year-old puerh is much better preserved.
The explanation comes from the wet leaves. Below, we can see that the 1999 puerh (right) has a lighter color. This indicates a drier storage. (The other reason for the concentration was the higher proportion of buds.) At slightly more than 1 USD per gram, a price often exceeded by good quality new puerhs, this 20-year-old cake is really another Best Buy in my selection!
2003 private cake vs 1999 Menghai 7542
Note: if you're looking for a third puerh Best Buy, I recommend the 2003 wild Yiwu cake. This is a puerh I've selected at the start of my tea adventures, 15 years ago! I've stored it in my Taipei apartment ever since. At 495 USD per cake (500 gr), its price per gram is cheaper than the 1999 '7542' and cheaper than many new wild raw puerhs! And it has already been very well aged in Taiwan's climate!

Explosm.net: Comic for 2019.02.19

New Cyanide and Happiness Comic

Penny Arcade: News Post: Constituency

Tycho: EA’s premium digital offerings - EA Access on Xbox and Origin Access on PC - grant early play for their titles.  Except when it doesn’t work, and you get the same error message people were getting in the betas.  I once suggested that EA games came with “free misery,” nearly ten full years ago, and it’s a policy they’ve maintained with pride. The problem isn’t with subscription services or VIP access or whatever; it’s their bits, they can apportion them however they want.  And clubs are fun, I like clubs.  But it also has to…

Disquiet: 46 Seconds in Heaven

A new glimpse of an installation piece by the artist Zimoun is always a cause for attention. His work often achieves a mix — a contrast, more to the point — of sizable dimensions and aesthetic intimacy. This balance is thanks to his frequent combination of inexpensive materials and the lulling repetition of speedy mechanical activities. The effect, as witnessed here, is a robot lullaby at an industrial scale.

This work, a video document of which appeared in the past week, consists of “99 prepared dc-motors, felt balls, 297 m steel wire, 2018” (such is, in effect, the title of the work — a plainness that matches the materials). The result is a mix of fierce geometry and sympathetic droning, of rapid motion amid an otherwise static field.

The vertical lines are like grid-minded painter Agnes Martin paying tribute to Richard Lippold’s wire sculptures. The base is like the structure of one of Bruce Nauman’s fluorescent bulbs — which emit their own drone byproduct — repurposed as a support mechanism. The video lasts just 46 seconds, seen from various angles. It’s intriguing to consider whether the audio perfectly matches the image, or if it even matters, given the mechanical nature of the proceedings and the extremely narrow — imperceptible, likely — range of variation therein. And then you hit repeat.

Video originally posted at Zimoun’s Vimeo account. More from Zimoun at www.zimoun.net.

Planet Haskell: Holden Karau: PyData Hong Kong - Making the Big Data ecosystem work together with Python: Apache Arrow, Spark, Flink, Beam, and Dask @ PyData Hong Kong

Come join me on Tuesday 19 February @ 02:00 at PyData Hong Kong 2019 for "Making the Big Data ecosystem work together with Python: Apache Arrow, Spark, Flink, Beam, and Dask." I'll update this post with the slides soon. Come see the talk or comment below to join in the discussion :). Talk feedback is appreciated at http://bit.ly/holdenTalkFeedback

Jesse Moynihan: Forming 265

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Designer Spotlight: Max Salzborn

Colossal: Freediving Champion Guillaume Néry Swims Across Several of the World’s Oceans with One Breath

In the newest film by Guillaume Néry (previously), the world champion free diver swims across the world in one breath, or at least creative editing and camera tricks present the illusion of this great feat. One Breath Around the World follows Néry to the spectacular scenes he explores without a snorkel or air tank, like a variety of underwater caves or a pod of clustered whales. The film is shot by his wife Julie Gautier (previously) who was also free diving as she filmed Néry throughout France, Finland, Mexico, Japan, the Philippines, and other oceanic destinations. The film was created through the pair’s production company Les Films Engloutis. You can see more of their spectacular underwater films on Vimeo. (via My Modern Met)

LLVM Project Blog: EuroLLVM'19 developers' meeting program

The LLVM Foundation is excited to announce the program for the EuroLLVM'19 developers' meeting (April 8-9 in Brussels, Belgium)!

Keynote
Technical talks
Tutorials
Student Research Competition
Lightning talks
BoFs
Posters
If you are interested in any of these talks, you should register to attend EuroLLVM'19. Tickets are limited!

More information about the EuroLLVM'19 is available here

Planet Haskell: Michael Snoyman: Shutting down haskell-lang.org

Early this week, I merged a commit which essentially shuts down the haskell-lang.org website. Except for a few rarely viewed pages without obvious replacements, visiting pages on https://haskell-lang.org will now automatically redirect you to an appropriate page on https://haskell.fpcomplete.com. Also, consider this an announcement that there’s a new site around, https://haskell.fpcomplete.com!

The site is still being refined. However, to avoid confusion, someone requested that I make an announcement now. That request makes sense. In this post, I want to explain:

  • Why I shut down haskell-lang.org
  • What the goal of haskell.fpcomplete.com is
  • What makes this new site different from haskell-lang.org
  • Plans for the future

Onward!

Shutting down haskell-lang.org

I’ve been playing with the idea of shutting down haskell-lang.org for a while now. I didn’t want to simply turn it off, since there’s a lot of useful content there that I regularly use myself and point others to for training (we’ll get to that in a bit). It wasn’t a huge amount of work to put together the replacement site, but it did take some dedicated writing time. I got that time last week and did most of the work then.

I first publicly proposed shutting down the site in December of last year. I put together a proposal to get a bunch of people together to work on a new Commercial Haskell site. There wasn’t much interest in such collaboration, so I went with option B, which I’ll explain in the next section.

As for why haskell-lang.org should be shut down, I’ll quote the history section of the new site’s README:

This website replaces some previous false starts (and mistakes) at trying to create an opinionated hub of Haskell information. Previously, some FP Complete members participated in haskell-lang.org as another vision of an opinionated Haskell site. Later, we sought out collaborators for creating a more complete Commercial Haskell website.

haskell-lang.org was a mistake, and there was insufficient interest in commercialhaskell.com. Given that the branding for haskell-lang.org was incorrect, there was little interest from others in the general concept of creating an opinionated site, and we at FP Complete were still passionate about this topic, we decided to focus efforts on a site under our own branding and control.

This site is unapologetically opinionated, and follows what we have found to be the best route towards getting productive with Haskell quickly.

The need for a site

haskell-lang.org has been collecting solid tutorials on general Haskell techniques and specific libraries. The collection is opinionated, so that I can use it for training courses I give, and point new users to it without needing to give any caveats about which approach to follow. And the site is backed by a Git repository using Markdown files, which I’m a huge fan of.

So the first goal of this site is: retain the technical content and educational approach provided by haskell-lang.org, without the bad history that goes along with that name.

Other goals

At FP Complete, we’ve done a few surveys of the Haskell community to get an idea of what we can do to most help more companies adopt Haskell. From our last survey, it seems that more educational content is top of the list, followed by helping companies generally feel comfortable adopting Haskell. I covered the education aspect above, and we’ll continue to put efforts into improving that situation.

On the more nebulous adoption point, we’re adding two more goals to the new site:

  • Provide promotion material: content that explains what makes Haskell a good choice for businesses, together with some potential drawbacks people should be aware of.
  • Introduce a Success Program at FP Complete, providing affordable commercial Haskell support including training and mentoring. We believe this may help more companies adopt Haskell.

Enrolling in the Success Program is a paid service at FP Complete (though we are pricing it as low as we can, to maximize adoption without actually losing money). We’re hoping that the presence of a clearly defined and publicly priced commercial support package will help reduce perceived risk with Haskell further and allow more adoption.

The future

The new site is still a work in progress. The styling still needs a bit of work, and I want to refine the language some (and likely scale back the prose). I also want to refresh a bunch of the technical content to be in line with our current recommendations. This will also affect the Applied Haskell training I’ll be giving later this year. (Feel free to check out the current version of the course online.)

I still have some questions up in the air, like whether we’ll host a blog on this site or simply continue all FP Complete blogging activity on our corporate site. I’ve started putting together a bit of a philosophy page explaining the FP Complete approach to commercial Haskell development, and that needs expanding. And I’d like to get more content on the contribute page to allow people to find a project they can cut their teeth on.

I hope that this new site not only allows for the creation of and access to great Haskell content. I also hope that this is taken as a positive message from the rest of the community and an indication of how we at FP Complete, and I personally, intend to interact with the broader Haskell community going forward. We’ll remain opinionated in our technical recommendations, as I believe we should be. But hopefully this change in the naming and purpose of the site removes any adversarial nature from sharing these opinions.

new shelton wet/dry: ‘Let’s not take ourselves seriously, there will be no survivors.’ –Alphonse Allais

Contrary to the belief that happiness is hard to explain, or that it depends on having great wealth, researchers have identified the core factors in a happy life. The primary components are number of friends, closeness of friends, closeness of family, and relationships with co-workers and neighbors. Together these features explain about 70 percent of [...]

Penny Arcade: Comic: Constituency

New Comic: Constituency

new shelton wet/dry: I’ll take a rusty nail, and scratch your initials in my arm

Leveraging popular social networking sites, individuals undertake certain forms of behavior to attract as many likes and followers as they can. One platform that symbolizes people’s love for strategic self-presentation to the utmost degree is Instagram. […] Narcissism is characterized by grandiose exhibition of one’s beauty and pursuit of others’ admiration. Posting selfies/groupies is associated with [...]

Colossal: Freshly Cut Flowers Make Sparks in Electrically Charged Images by Hu Weiyi

Image credits: Hu Weiyi and A+ Contemporary

In The Tentacles project, by Chinese artist Hu Weiyi, bright sparks and fiery electrical waves flow through a series of freshly blossomed flowers against matte gray backgrounds. To produce the images, Hu uses high-voltage capacitors to create electrical currents that run through the pink and maroon roses, showcasing the power of electricity in all its beauty and danger.

The photo series was inspired by a previous project Hu created in 2014, called Flirt, which introduced cold light to various objects to manipulate viewers’ perception without using digital software. “I then began to study the high-voltage arc and made a high-voltage capacitor which can instantaneously penetrate through the air,” says Hu. “The principle is similar to that of the electric baton, but much stronger.”

The research behind The Tentacles took Hu over a year. He worked with various technicians to try different types of electric discharge devices that would exert the right amount of electrical flow to be captured by his camera. In this experimental phase, Hu used dozens of roses and took hundreds of photographs before finding the right images and settings for his final collection. “My studio is therefore filled with the unpleasant smell of rotten flowers, just like a morgue,” says the artist.

Hu’s work illustrates the aesthetic beauty and diversity of physical forms: the softness and stillness of the spongy rose petals in comparison to the dangerous allure of the electrical spark. “The moment of discharge is wonderful and sexy, but it can also be a cold-blooded tool for torture and execution,” he explains. Hu’s combination of materials illustrates the impermanence of natural plant matter, much like the fragile nature of the human body when exposed to lightning. “The flowers in full bloom remind me of my own fragility and powerlessness,” says Hu.

In comparison to manipulating photographs with software such as Photoshop, the time, precision, and research in Hu’s work give the subjects in his images more weight, their electricity more tangible presence. You can see more of Hu’s photographs on A+ Contemporary’s website.

new shelton wet/dry: This is very surprising and it is a really bad news for CoCos, specially for those that have low coupon for the first call

Revising things makes people think they are better, absent objective improvement. We refer to this phenomenon as the revision bias. […] We propose that the fact that revisions typically are intended to be improvements over their originals gives rise to an overgeneralized heuristic that revisions necessarily are improvements over their originals. Yet, as any author responding [...]

Planet Haskell: Monday Morning Haskell: Upgrading My Development Setup!


In the last year or so, I haven't actually written as much Haskell as I'd like to. There are a few reasons for this. First, I switched to a non-Haskell job, meaning I was now using my Windows laptop for all my Haskell coding. Even on my previous work laptop (a Mac), things weren't easy. Haskell doesn't have a real IDE, like IntelliJ for Java or Xcode for iOS development.

Besides not having an IDE, Windows presents extra pains compared to the more developer-friendly Mac. And it finally dawned on me. If I, as an experienced developer, was having this much friction, it must be a nightmare for beginners. Many people must be trying to both learn the language AND fight against their dev setup. So I decided to take some time to improve how I program so that it'll be easier for me to actually do it.

I wanted good general functionality, but also some Haskell-specific functions. I did a decent amount of research and settled on Atom as my editor of choice. In this article, we'll explore the basics of setting up Atom, what it offers, and the Haskell tools we can use within it. If you're just starting out with Haskell, I hope you can take these tips to make your Haskell journey easier.

Note that many tips in this article won't work without the Haskell platform! To start with Haskell, download our Beginners Checklist, or read our Liftoff Series!

Goals

It's always good to begin with the end in mind. So before we start out, let's establish some goals for our development environment. A lot of these are basic items we should have regardless of what language we're using.

  1. Autocomplete. Must have for terms within the file. Nice to have for extra library functions and types.
  2. Syntax highlighting.
  3. Should be able to display at least two code files side-by-side, should also be able to open extra files in tabs.
  4. Basic file actions should only need the keyboard. These include opening new files to new tabs or splitting the window and opening a new file in the pane.
  5. Should be able to build code using the keyboard only. Should be able to examine terminal output and code at the same time.
  6. Should be able to format code automatically (using, for instance, Hindent)
  7. Some amount of help filling in library functions and basic types. Should be able to coordinate types from other files.
  8. Partial compilation. If I make an obvious mistake, the IDE should let me know immediately.
  9. Vim keybindings (depends on your preference of course)

With these goals in mind, let's go ahead and see how Atom can help us.

Basics of Atom

Luckily, the installation process for Atom is pretty painless. Using the Windows installer came off without a hitch for me. Out of the box, Atom fulfills most of the basic requirements we'd have for an IDE. In fact, we get goals 1-4 without putting in any effort. The trick is that we have to learn a few keybindings. The following are what you'll need to open files.

  1. Ctrl+P - Open a new tab with a file using fuzzy find
  2. Ctrl+K + Direction (left/right/up/down arrow) - Open a new pane (will initially have the same file as before).
  3. Ctrl+K + Ctrl+Direction - Switch pane focus

Those commands solve requirements 3 and 4 from our goals list.

Another awesome thing about Atom is the extensive network of easy-to-install plugins. We'll look at some Haskell specific items below. But to start, we can use the package manager to install vim-mode-improved. This allows most Vim keybindings, fulfilling requirement 9 from above. There are a few things to re-learn with different keystrokes, but it works all right.

Adding Our Own Keybindings

Since Atom is so hackable, you can also add your own keybindings and change ones you don't like. We'll do one simple example here, but you can also check out the documentation for some more ideas. One thing we'll need for goal #5 is to make it easier to bring up the bottom panel within Atom. This is where terminal output goes when we run a command. You'll first want to open up keymap.cson, which you can do by going to the file menu and clicking Keymap….

Then you can add the following lines at the bottom:

'atom-workspace':
  'ctrl-shift-down': 'window:toggle-bottom-dock'
  'ctrl-shift-up': 'window:toggle-bottom-dock'

First, we scope the command to the entire atom workspace. (We'll see an example below of a command with a more limited scope). Then we assign the Ctrl+Shift+Down Arrow key combination to toggle the bottom dock. Since it's a toggle command, we could repeat the command to move it both up and down. But this isn't very intuitive, so we add the second line so that we can also use the up arrow to bring it up.

A super helpful tool is the key binding resolver. At any point, you can use ctrl+. (control key plus the period key) to bring up the resolver. Then pressing any key combination will bring up the commands Atom will run for it. It will highlight the one it will pick in case of conflicts. This is great for finding unassigned key combinations!

Haskell Mode in Atom

Now let's start looking at adding some Haskell functionality to our editor. We'll start by installing a few different Haskell-related packages in Atom. You don't need all these, but this is a list of the core packages suggested in the Atom documentation.

language-haskell
ide-haskell
ide-haskell-cabal
haskell-ghc-mod
autocomplete-haskell
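
You can install each of these through the Settings view, or from the command line. As a quick sketch (this assumes Atom's apm command-line tool is on your PATH), a single command pulls them all in:

apm install language-haskell ide-haskell ide-haskell-cabal haskell-ghc-mod autocomplete-haskell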

The trickier part of getting Haskell functionality is the binary dependencies. A couple of the packages we added depend on having a couple of programs installed. The most prominent of these is ghc-mod, which exposes some functionality of GHC. You'll also want a code formatter, such as hindent or stylish-haskell, installed.

At the most basic level, it's easy to install these programs with Stack. You can run the command:

stack install ghc-mod stylish-haskell

However, ghc-mod matches up with a specific version of GHC. The command above installs the binaries at a system-wide level. This means you can only have the binaries for one GHC version installed at a time. So imagine you have one project using GHC 8.0, and another project using GHC 8.2. You won't be able to get Haskell features for each one at the same time using this approach. You would need to re-install the proper version whenever you switched projects.

As a note, there are a couple ways to ensure you know what version you've installed. First, you can run the stack install ghc-mod command from within the particular project directory. This will use that project's LTS to resolve which version you need. You can also modify the install command like so:

stack --resolver lts-9 install ghc-mod

There is an approach where you can install different, compiler-specific versions of the binary on your system, and have Atom pick the correct one. I haven't been able to make this approach work yet. But you can read about that approach on Alexis King's blog post here.

Keybinding for Builds

Once we have that working, we'll have met most of our feature goals. We'll have partial compilation and some Haskell specific autocompletion. There are other packages, such as haskell-hoogle that you can install for even more features.

There's one more feature we want though, which is to be able to build our project from the keyboard. When we installed our Haskell packages, Atom added a "Haskell IDE" menu at the top. We can use this to build our project with "Haskell IDE" -> "Builder" -> "Build Project". We can add a keybinding for this command like so.

'atom-text-editor[data-grammar~="haskell"]':
  ...
  'ctrl-alt-shift-b': 'ide-haskell-cabal:build'

Notice that we added a namespace here, so this command will only run on Haskell files. Now we can build our project at any time with Ctrl+Shift+Alt+B, which will really streamline our development!

Weaknesses

The biggest weakness with Atom Haskell-mode is binary dependencies and GHC versions. The idea behind Stack is that switching to a different project with a different compiler shouldn't be hard. But there are a lot of hoops to jump through to get editor support. To be fair though, these problems are not exclusive to Atom.

Another weakness is that the Haskell plugins for Atom currently only support up through LTS 9 (GHC 8). This is a big weakness if you're looking to use new features from the cutting edge of GHC development. So Atom Haskell-mode might not be fully-featured for industry projects or experimental work.

As a further note, the Vim mode in Atom doesn't give all the keybindings you might expect from Vim. For example, I could no longer use the colon key plus a number to jump to a line. Of course, Atom has its own bindings for these things. But it takes a little while to re-learn the muscle memory.

Alternatives

There are, of course, alternatives to the approach I've laid out in this article. Many plugins/packages exist enabling you to get good Haskell features with Emacs and Vim. For Emacs, you should look at haskell-mode. For Vim, I made the most progress following this article from Stephen Diehl. I'll say for my part that I haven't tried the Emacs approach, and ran into problems a couple times with Vim. But with enough time and effort, you can get them to work!

If you use Visual Studio Code, there are a couple of packages for Haskell: Haskelly and Haskero. I haven't used either of these, but they both seem to provide a lot of nice features.

Conclusion

Having a good development environment is one of the keys to programming success. More popular languages have full-featured IDEs that make programming a lot easier. Haskell doesn't have this level of support. But there's enough community help that you can use a hackable editor like Atom to get most of what you want. Since I fixed this glaring weakness, I've been able to write Haskell much more efficiently. If you're starting out with the language, this can make or break your experience! So it's worth investing at least a little bit of time and effort to ensure you've got a smooth system to work with.

Of course, having an editor setup for Haskell is meaningless if you've never used the language! Download our Beginners Checklist or read our Liftoff Series to get going!

new shelton wet/dry: I’ll show you how to sneak up on the roof of the drugstore

[I]t is getting harder to target gamers via traditional advertising techniques, because an increasing number of consumers spend more of their digital days behind paywalls, where there is often no advertising. These are also typically the most engaged and most-spending audiences. To win some of the attention back, games companies must target gamers behind paywalls, [...]

Planet Haskell: Sandy Maguire: Freer Monads: Too Fast, Too Free

The venerable Lyxia had this to say about my last post on freer monads:

I agree the performance argument is way less important than the frequency at which it's thrown around makes it seem. The reason freer performance sucks is that you're repeatedly constructing and deconstructing trees at runtime. However, that is only a consequence of the implementation of freer as a GADT (initial encoding). I bet the final encoding can do wonders:

newtype Freer f a = Freer (forall m. Monad m => (forall t. f t -> m t) -> m a)

I spent a few days working through the implications of this, and it turns out to be a particularly compelling insight. Behold the microbenchmarks between freer-simple and an equivalent program written against mtl:

benchmarking freer-simple
time                 745.6 μs   (741.9 μs .. 749.4 μs)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 745.1 μs   (742.2 μs .. 748.5 μs)
std dev              10.68 μs   (8.167 μs .. 14.23 μs)

benchmarking mtl
time                 10.96 μs   (10.93 μs .. 10.98 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 10.95 μs   (10.92 μs .. 10.99 μs)
std dev              119.3 ns   (93.42 ns .. 153.7 ns)

Not so good; freer-simple is like 75x worse in this case! But the same program again when written in this final encoding is pretty damn fast:

benchmarking too-fast-too-free
time                 24.23 μs   (24.10 μs .. 24.37 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 24.27 μs   (24.15 μs .. 24.40 μs)
std dev              448.8 ns   (355.8 ns .. 586.1 ns)

It's roughly 2x slower than mtl, which is AKA 35x faster than freer-simple. This is pretty sweet, and it comes with the benefit of getting to keep the underlying semantics of freer-simple.

So without further ado, I'd like to share my work-in-progress with you, tentatively named too-fast-too-free. This is ready for prime-time, but I'd prefer to merge it into something upstream rather than pollute hackage with yet another free(r) monad extensible effects package.

I'll do it if I have to, but the performance is fair game for anyone who wants it. If I don't hear from anyone by next week, I'll publish a new package to hackage and begin the freer monad revolution we've all been waiting for.

What the Heck Is Any of this Stuff Anyway?

Let's investigate this finally-encoded type and see where this performance comes from:

newtype Freer f a = Freer
  { runFreer :: forall m. Monad m => (forall t. f t -> m t) -> m a
  }

The type of runFreer is saying "if you give me a Freer f a and a natural transformation from f to some monad m, then I can give you back an m a." Sounds promising, right?

Freer's instance for Monad is written in terms of this final m, and so short of shunting around some functions, we're not really paying any cost for binds compared to just writing in terms of m:

instance Monad (Freer f) where
  Freer ma >>= f = Freer $ \k -> do
    a <- ma k
    runFreer (f a) k

Compare this to the approach used by freer-simple, which needs to allocate leaves in a tree for every bind (and for every fmap and ap!). That's a huge win already.
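
To see the mechanics end to end, here's a minimal, self-contained sketch of interpreting one effect under this encoding. The newtype and Monad instance are the ones above; the Functor and Applicative instances, and the Teletype, send, runTeletypeIO, and echo names, are mine, added only so the snippet compiles and runs on its own:

{-# LANGUAGE GADTs      #-}
{-# LANGUAGE RankNTypes #-}

newtype Freer f a = Freer
  { runFreer :: forall m. Monad m => (forall t. f t -> m t) -> m a }

instance Functor (Freer f) where
  fmap f (Freer ma) = Freer $ \k -> fmap f (ma k)

instance Applicative (Freer f) where
  pure a = Freer $ \_ -> pure a
  Freer mf <*> Freer ma = Freer $ \k -> mf k <*> ma k

instance Monad (Freer f) where
  Freer ma >>= f = Freer $ \k -> ma k >>= \a -> runFreer (f a) k

-- Inject a single effect: just hand it straight to the interpreter.
send :: f a -> Freer f a
send fa = Freer $ \k -> k fa

-- A toy effect for illustration.
data Teletype a where
  GetLine :: Teletype String
  PutLine :: String -> Teletype ()

-- Interpreting is nothing more than supplying the natural transformation.
runTeletypeIO :: Freer Teletype a -> IO a
runTeletypeIO p = runFreer p $ \t -> case t of
  GetLine   -> getLine
  PutLine s -> putStrLn s

-- A tiny program: read a line, echo it back.
echo :: Freer Teletype ()
echo = send GetLine >>= send . PutLine

Note that there's no tree to build or walk here: runTeletypeIO echo simply instantiates m to IO, so the program runs as the straight-line IO code it would have been anyway.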

Turning Freer into Eff uses the same trick as freer-simple---let Eff r be Freer (Union r), where Union r is a value that can be any effect in r. A natural transformation forall m. Monad m => (forall t. Union r t -> m t) must therefore handle every possible effect in r, and so we haven't lost any capabilities with our new encoding.

The challenging part was figuring out how to plumb state through the encoding of Freer f a---after all, many interesting interpreters are necessarily stateful.

Fortunately there's a trick. Because Eff (e ': r) a can be interpreted in terms of any Monad m, we can choose m ~ StateT s (Eff r), and get our statefulness from StateT directly. Because StateT's bind is written in terms of its underlying monad, this trick doesn't cost us anything more than shunting another few functions around.
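
Here's the same trick in miniature, stated against the bare Freer type rather than Eff (the State effect and runState names are hypothetical, I'm using Identity where the real implementation would use Eff r, and this assumes the Freer newtype from the sketch above):

{-# LANGUAGE GADTs      #-}
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity, runIdentity)
import Control.Monad.Trans.State.Strict (StateT, runStateT)
import qualified Control.Monad.Trans.State.Strict as S

-- A hypothetical State effect.
data State s a where
  Get :: State s s
  Put :: s -> State s ()

-- Instantiate m ~ StateT s Identity and let transformers do the plumbing.
runState :: s -> Freer (State s) a -> (a, s)
runState s0 p =
  runIdentity $ flip runStateT s0 $ runFreer p $ \eff ->
    case eff of
      Get   -> S.get
      Put s -> S.put s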

We can achieve short-circuiting interpreters similarly by evaluating them via ExceptT (Eff r). In fact, this pattern turns out to be pretty common---and it generalizes thusly:

transform
    :: ( MonadTrans t
       , MFunctor t
       , forall m. Monad m => Monad (t m)
       )
    => (forall m. Monad m => t m a -> m b)
       -- ^ The strategy for getting out of the monad transformer.
    -> (eff ~> t (Eff r))
    -> Eff (eff ': r) a
    -> Eff r b
transform lower f (Freer m) = Freer $ \k -> lower $ m $ \u ->
  case decomp u of
    Left  x -> lift $ k x
    Right y -> hoist (usingFreer k) $ f y

Admittedly the type is a little terrifying, but library code can specialize it down to less offensive combinators.
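
For instance, one plausible specialization (a sketch only, reusing the hypothetical State effect and imports from earlier, and trusting transform to do what its type says) might be:

-- Assumes DataKinds/TypeOperators for the (':) syntax.
runStateEff :: s -> Eff (State s ': r) a -> Eff r (a, s)
runStateEff s0 = transform (flip runStateT s0) $ \eff ->
  case eff of
    Get   -> S.get
    Put s -> S.put s

Here t is instantiated to StateT s and lower to flip runStateT s0; all of the hoisting and decomposition noise stays hidden inside transform.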

At the end of the day, this final encoding means that Freer code specializes down to its eventual result anyway, giving us the "fusion" of fused-effects without the boilerplate.

Hopefully these results are enough to finally put the "free monads have bad performance" argument to rest. I'll have some promising results on the bracket front as well that require some refinement, but hopefully those will come sooner than later.

¡Viva la freer monad revolucion!

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: “Delete This Number” by Photographer Mary Morgan

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Artist Spotlight: Dario Maglionico

Disquiet: The Ambient Craft of Spy Dramas

The clock is nearing 11pm on a Saturday, and another episode of Berlin Station cycles up, following immediately on the end of the previous one. Season three of the TV series, a spy drama, is underway, set in a brightly lit, contemporary Estonia under a creeping, old-school Russian curtain — or at least so things seem. It’s still early on.

There has been a break in the week’s heavy wind and rain this evening, and the house is especially quiet, little to no sound inside or out. The show’s own audio, as a result, is all the more present in the living room. The actors’ voices are hushed, anxious. The stereo spectrum of European café scenes brings the steam of espresso machines within reach. The echoes of hospital hallways on the screen are on loan to the dimly lit room in which the TV hangs.

Spy dramas, like horror movies and romances, are filled with extended sequences containing little to no dialog. Unlike the other two, spy stories often never reach a climax. At their best, such thrillers are often all suspense — which is to say, all suspension: not action, but the holding off of action.

Striving honorably for a proper audience to get it to another season, Berlin Station has its share of fisticuffs and explosions, but it also has a lot of attenuation — the mapping of questionable territory, enough skulking to max out a Fitbit, detailed surveillance maneuvers (the above image is from the show’s opening credits: ears are everywhere). The thing built up to by the ever-heightening if still deeply sublimated drama might simply be a piece of paper folded and handed from one palm to another surreptitiously, or a fragment of chalk being used to mark a wall in an innocuous neighborhood, or a file disappearing suddenly and almost imperceptibly on a network.

And such sequences are where the show’s composer — Reinhold Heil, who first came to prominence with Run Lola Run — is at his best. There’s only one collection of Berlin Station music so far, from the first season, released back in 2016. Early on in it is a track titled “Dirty Laundry,” segments of synths and strings and muffled percussion that suggest something big is about to happen — and it may, but it never sounds big. (Bizarrely for 2019, this track doesn’t appear to be on YouTube, which is why I haven’t embedded it here, but it is on all the major streaming services.)

This is all a roundabout explanation — following my own recent realization — of why I so enjoy watching spy dramas and related thrillers, and why I so enjoy listening to their scores. It’s why, as much as I love Cliff Martinez’s music for Solaris, it’s his The Company You Keep and Arbitrage scores that I listen to most often. The sublimated intensity that these narratives call for in their soundtracks is exactly what can make for excellent ambient music. The genre is often conflated with new age or spiritual or relaxing music, but it can be very tense as well. And like a good spy, Berlin Station‘s Heil does his best work in the shadows.

Explosm.net: Comic for 2019.02.18

New Cyanide and Happiness Comic

Jesse Moynihan: Ronin Diaries

If y’all have been closely reading my posts over the past 3 years, you probably know that after I quit Adventure Time, I got pretty heavy into a midlife crisis/jiu jitsu obsession. I was biking down Sunset Blvd in August of 2015, when I saw two dudes grappling in the window of a boxing gym. […]

Stok Footage: Almost a Year Since the Most Recent Post

It’s odd how time passes, and how my take on what’s worthy of a blog post has changed.

These days many of my urges to quip are satisfied by Twitter, and summoning the energy to structure and edit a long-ish form post seems more bother than it’s worth.

I wonder what value maintaining a web presence has to me.

Colossal: Intricate Metal Root Sculptures by Sun-Hyuk Kim Take Human Form

South Korean artist Sun-Hyuk Kim (previously) cuts, welds, melts, and curves pipes and wires into structures that are part human anatomy and part twisted plant root systems. The branch-like metal blood vessels create the outline of limbs, abdomens, and heads, as well as the trees that appear to have sprouted from them. Made entirely of stainless steel, the sculptures are meant to signify our imperfect and incomplete existence in relation to the natural world.

“My art is a tool to discover the truth and remind myself [and] viewers through various media,” Sun-Hyuk told Colossal. From large head-shaped root sculptures connected at the nose, to full body works with large trunks protruding from the head, back, and torso, the sculptures are often dramatic depictions of the human experience and what the artist considers truth.

New sculptures and drawings will be shown at Sun-Hyuk’s upcoming solo show at the Suhadam Art Space in South Korea from June 7 through August 5, 2019. To see more of his current and future works, you can also follow the artist on Instagram. (via Ignant)

Explosm.net: Comic for 2019.02.17

New Cyanide and Happiness Comic

new shelton wet/dry: First, the meditator explains why he will doubt. Second, he gives an account of the way he will go about doubting. Third, he engages in the activity of doubting. Fourth and finally, he reflects on the power of habitual opinions and their tendency to resist doubt.

GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. […] GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy [...]

Daniel Lemire's blog: Science and Technology links (February 16th, 2019)

    1. In their new book Empty Planet, Bricker and Ibbitson argue that within the next 30 years, Earth’s population will start to rapidly decline. They believe that official population predictions overestimate future populations because they fail to fully take into account accelerating cultural changes.
    2. It is believed that senescent cells are a major driver of age-related conditions. Senescent cells often occur when cells are the result of too many divisions (beyond the Hayflick limit). Our hearts age, but their cells do not divide very much. That limits our hearts’ ability to repair themselves (by creating new cells), but it should also protect them from senescent cells… Yet Anderson et al. found that there are senescent cells in the heart: basically, cells can become senescent due to damage. What is more exciting is that they found that by clearing these cells in old mice, they could effectively rejuvenate their hearts. There is a growing number of therapies for removing senescent cells, and there are ongoing (early) clinical trials to measure the effect of removing senescent cells in human beings. Initial results are encouraging:

      The doctors found that nine doses of the two pills over three weeks did seem to improve patients’ ability to walk a bit farther in the same amount of time, and several other measures of well-being.

      More trials will start this year.

    3. Goldacre et al. looked at how well the most prestigious journals handle the agreed-upon set of standards for reporting scientific trials:

      All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy.

      (Source: A. Badia)

    4. A new drug appears to reverse age-related memory loss, in mice.

Explosm.net: Comic for 2019.02.16

New Cyanide and Happiness Comic

CreativeApplications.Net: Open Highway – Surfacing the hidden layers of the city

Created by RNDR, Open Highway is a real-time light installation visualising highway vehicles at a scale of 1:1 from the Leidsche Rijn tunnel over the A2 highway in Utrecht.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Artist Spotlight: Debora Cheyenne Cruchon

Debora Cheyenne Cruchon’s Website

Debora Cheyenne Cruchon on Instagram

Perlsphere: Maintaining Perl 5 (Tony Cook): January 2019 Grant Report

This is a monthly report by Tony Cook on his grant under the Perl 5 Core Maintenance Fund. We thank the TPF sponsors for making this grant possible.



Approximately 54 tickets were reviewed, and 11 patches were applied.

[Hours]         [Activity]
  8.88          #108276 review
                #108276 check over committed changes, look to re-work, ask
                list about PERL_OP_PARENT
                #108276 cleanup PERL_OP_PARENT detritus
                #108276 review old patches, re-work, testing
                #108276 more testing, work on optimize_op()
                #108276 research, cleanup, more testing. Comment with
                patch
  0.58          #121033 research, comment
  1.98          #123724 rebase, retesting, bump versions, apply to blead,
                make public
  0.05          #126191 check for activity and close
  2.69          #127521 review code, POSIX, work on implementation
                #127521 local commit with notes
  1.72          #130367 rebase, write new tests, testing, apply to blead
                #130367 perldelta
  1.90          #130585 debugging
                #130585 debugging
  0.32          #131506 research and comment
                #131506 close
  0.07          #131931 link to stack-not-refcounted meta ticket
  0.08          #131955 check for activity and close
  0.52          #132158 rebase, retest and apply to blead
  1.35          #132338 research, comment
  0.12          #132583 same and add the form sub-parse fix to backports
                (the other is in 5.28 already)
  3.76          #132777 prep, review code, research, work on some docs
                #132777 testing, more docs, comment with patch
  0.27          #133030 re-check, comment
  0.27          #133153 comment
  0.62          #133504 research and comment
  0.55          #133522 review patch, testing, apply to blead
  1.88          #133524 work up a fix and apply to blead
  3.88          #133575 prep and start gcc 8.2.0 build
                #133575 work up a patch, testing, comment with patch
                #133575 consider comment changes, research, testing, apply
                to blead
  0.60          #133590 try to work with metaconfig, report upstream to
                the metaconfig wizards
  0.77          #133721 review, write test, testing, apply test and fix to
                blead
  0.13          #133740 review new patch and comment
  0.13          #133744 review and comment
  0.10          #133746 review and close
  0.10          #133750 review and reject
  1.00          #133751 comment with patch
                #133751 retest, apply to blead
  0.55          #133752 follow up
                #133752 got a response privately due to rt.perl.org
                issues, forwarded request to perlbug-admin and closed
                tickets
  0.25          #133753 (sec) fix subject, comment
                #133753 (sec) testing, comment
  1.82          #133754 test setup, testing, research
  2.08          #133755 research, produce a patch and comment
                #133755 fix a typo, test, apply to blead
                #133755 comment and close
  0.25          #133756 review and comment briefly
  0.61          #133760 research and comment
                #133760 research (read the Configure source) and comment
  0.72          #133765 review, start prep to test, review code, comment
                #133765 research, add commit to votes file
  0.53          #133771 review patch and comment
                #133771 review new patch and comment
  0.97          #133776 review and comment
                #133776 review and comment
  3.32          #133782 diagnose, work on a fix, testing
                #133782 retest patch against blead, apply to blead
                #133782 re-test, fix ticket number in patch, apply to
                blead
  0.38          #133787 research (more Configure) and comment
  0.33          #133788 research history of APIs, remove M flag and apply
                to blead
  1.07          #133789 debugging, comment
  0.25          #133806 bump $IO::VERSION, testing
  2.35          add t/ file leak checks and TMPDIR leak checks
  0.48          look for possibly fixed sub-parse tokenizer bugs, find a
                couple and comment, bookmark for later closing
  0.72          look for some more possibly fixed tokenizer bugs
  2.82          look into detecting shmem leaks reported by Tux,
                https://github.com/Test-More/test-more/issues/823
  0.17          review maint votes, remove one already picked
  1.07          utf8-readline: build fixes, debugging
  1.50          utf8-readline: clean up, consider re-working options
  0.62          utf8-readline: debug failing tests
  2.33          utf8-readline: debugging
  2.52          utf8-readline: debugging, fixes for buffer parsing tests
  0.32          utf8-readline: more debugging failing tests
  0.10          utf8-readline: rebase, review
  2.80          utf8-readline: track down a fix warning handling
======
 65.25 hours total

Explosm.net: Comic for 2019.02.15

New Cyanide and Happiness Comic

Quiet Earth: SXSW 2019: First Look at GIRL ON THE THIRD FLOOR [Teaser]

Travis Stevens is best known as a producer of such memorable projects as Cheap Thrills, Jodorowsky's Dune, Starry Eyes, XX, and Mohawk, among many others, but with the upcoming Girl on the Third Floor, he's adding writer and director to his resume.


This isn't exactly The Money Pit, but that comedy is the first thing that came to mind when I saw the write-up for Stevens' debut, which stars former WWE star Phil Brooks as Don Koch, the recent buyer of a dilapidated Victorian house. Don convinces his wife Liz (Trieste Kelly) that he can manage the renovations himself, but he soon realizes that he's in over his head - and it probably doesn't help matters any that the house appears to have a sordid past and may be haunted.


The fir [Continued ...]

Disquiet: Aphex Twin (in Japan)

I do believe this may be the cover of the upcoming Japanese translation of my 33 1/3 book on Aphex Twin’s landmark album Selected Ambient Works Volume II. Having spent much of the early 2000s working in manga, which is to say helping shepherd the translation into English of Japanese books, I’d say it’s nice to finally be sending a book back in the opposite direction.

Disquiet: Disquiet Junto Project 0372: Honeymoon Phase

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Tracks will be added to the playlist for the duration of the project.

Deadline: This project’s deadline is Monday, February 18, 2019, at 11:59pm (that is, just before midnight) wherever you are on planet Earth. It was posted shortly after noon, California time, on Thursday, February 14, 2019.

These are the instructions that went out to the group’s email list (at tinyletter.com/disquiet-junto):

Disquiet Junto Project 0372: Honeymoon Phase
The Assignment: Record a piece of music with (only) your most recently obtained instrument or music/sound tool.

Step 1: Locate the latest instrument, piece of music/sound software, or related technology that has come into your possession. (If there’s something inexpensive, like an app, you’ve been meaning to try out, this project might provide an impetus to do so.)

Step 2: Employ only the single thing identified in Step 1 to compose and record a short track.

Seven More Important Steps When Your Track Is Done:

Step 1: Include “disquiet0372” (no spaces or quotation marks) in the name of your track.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0372” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential to the subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your track. It is helpful but not essential that you use SoundCloud to host your track.

Step 4: Post your track in the following discussion thread at llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0372-honeymoon-phase/

Step 5: Annotate your track with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #disquietjunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Additional Details:

Deadline: This project’s deadline is Monday, February 18, 2019, at 11:59pm (that is, just before midnight) wherever you are on planet Earth. It was posted shortly after noon, California time, on Thursday, February 14, 2019.

Length: The length is up to you. Short is good.

Title/Tag: When posting your track, please include “disquiet0372” in the title of the track, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, post one finished track with the project tag, and be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: For this project, please be sure to set your track as downloadable and to allow for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include the following information:

More on this 372nd weekly Disquiet Junto project — Honeymoon Phase / The Assignment: Record a piece of music with (only) your most recently obtained instrument or music/sound tool — at:

https://disquiet.com/0372/

More on the Disquiet Junto at:

https://disquiet.com/junto/

Subscribe to project announcements here:

http://tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0372-honeymoon-phase/

There’s also a Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.

Image associated with this project adapted thanks to a Creative Commons license from a photo by Thorsten Sideb0ard:

https://www.flickr.com/photos/sideb0ard/10364491865/

https://creativecommons.org/licenses/by-nc-sa/2.0/

Perlsphere: The Perl Conference in Pittsburgh - Call For Presentations

Sure, there’s more than one way to do it, but yours is the best, right? Seize this opportunity to prove it by submitting a talk for The Perl Conference in Pittsburgh! Submit your presentation ideas by March 1. If it’s accepted you’ll attend the conference as a guest speaker for free and get a free room upgrade just for submitting your proposal!

We are looking for all levels of speakers; first-timers are welcome! The audience will range in levels of expertise, so all levels of talks are welcome - your advanced talk is sure to be a hit, just like any “Getting started with” presentations geared toward beginners. We know the Perl community can benefit from what you have to say! Find out more and submit your idea here.

Not sure you want to give a presentation? We understand. Consider at least attending a few of them. The main conference is Jun 17-19 with master class tutorial sessions surrounding it on Jun 16, 20, and 21. Register to attend The Perl Conference in Pittsburgh today, and you’ll receive the early bird discount!

Cover image from Wikimedia, licensed under Creative Commons

Quiet Earth: Must Watch Trailer for "Cosmic" Apocalyptic Horror Film STARFISH

Wow, this flick looks great with shades of The Quiet Earth and Stephen King's more cosmic horror stories.

Yellow Veil Pictures and 1091 Media's The Orchard have released the first trailer for upcoming cosmic horror Starfish, featuring a dazzling lead performance from Virginia Gardner (Halloween 2018, Marvel’s The Runaways).

The film begins its theatrical roadshow tour in NYC on March 13 with a rollout in other cities through late April, followed by the Digital/VOD release May 28. A full list of theatrical dates can be found below.

Gardner stars as Aubrey, a young woman suffering from the death of a close friend. When a mysterious signal from an unknown dimension summons the end of days, it appears as if only Aubrey is left on earth. Trapped in the apar [Continued ...]

Quiet Earth: Fantasia Hit THE UNSEEN Hits DVD & Digital this February 26th!

The Unseen is an upcoming invisible man film from director Geoff Redknap (Cabin in the Woods). The film had its World Premiere at Montreal’s Fantasia Film Festival this past summer, and now it is set to arrive on DVD and Digital through US film distributor Monarch Home Entertainment this February 26th.

The Unseen stars Aden Young (“Rectify”) as Bob Langmore, who has a strange condition in which his body is slowly disappearing. Dissolving away, Bob reaches out to his family with time running out. His former wife, Darlene (Camille Sullivan), tells him that their daughter Eva (Julia Sarah Stone) is missing, leading to Bob’s desperate search for a reunion.


Synopsis:
A man, who years earlier mysteriously abandoned his family and isola [Continued ...]

Daniel Lemire's blog: My iPad Pro experiment: almost two years later

Soon after the first iPad Pro came out, I bought one and started using it daily. The experiment is ongoing and I thought it was time to reflect upon it further.

Before I begin, I need to clarify my goals. When I started this experiment, some people objected that I could get a hybrid (laptop/tablet). That is definitely true, and it would be more “practical”. However, it would not be much of an experiment. I am deliberately trying to push the envelope, to do something that few do. So I am not trying to be “practical”.

And, indeed, using an iPad Pro for work is still an oddity. Relying solely on an iPad Pro for serious work is even rarer. I am currently in Ottawa reviewing grant applications. There are a few dozen computer-science researchers (mostly professors) around me. The only other person with an iPad is a Facebook researcher, and he seems to be using the iPad only to read the applications; otherwise he appears to be using a laptop.

In my department, other faculty members have iPad Pros, but I think only one of my colleagues uses it seriously. The others do not appear to use these tablets for work when they have them. I am not sure.

  1. The main impact of relying mostly on a tablet is that I am always focusing on one or two applications at a time. I recall finding it really cool, back in the day, when a Unix box would allow me to have 50 windows open at a time. I now think that having many windows open is akin to having many different physical files open on your desk. It is distracting. For example, on a laptop, I would write this blog post while having an email window open, and probably a couple of text editors with code. Yes, you can work in full-screen mode on a laptop, and I try to do it, but I unavoidably revert to having dozens of applications on my screen. Laptops just make it too convenient to do multiple things at once. If you need to concentrate on one thing for a long time, you really want to have just one clean window, and a tablet is great at that. On this note, it is also why I prefer to program in a text editor that has as few distractions as possible. I can write code just fine in Eclipse or Visual Studio, and for some tasks it is really the best setup, but it often leaves me distracted compared to when I work with a single full-screen editor with just one file open.
  2. Though I could not prove it, I feel that using a tablet makes me a better “reader”. Much of my work as a university professor and researcher involves reading and commenting on what other people are doing. The fact that I am enticed to concentrate on one document, one task, at a time forces me to be more thorough, I think.
  3. As far as I can tell, programming seriously on a tablet like an iPad Pro is still not practical. However, there are decent ssh clients (I use Shelly), so if you master Unix tools like vim, emacs, make, and the like, you can get some work done.
  4. I’d really like to push the experiment to the point where I no longer use a keyboard. That’s not possible at this time. I like the keyboard that Apple sells for the iPad Pro 2018. There is a major upside: the keyboard is entirely covered, so it is not going to stop working because you spilled some coffee on it.
  5. Generally, most web applications work on a tablet, as you would expect. However, it is quite obvious that some of them were not properly tested. For example, I write research papers using a tool called Overleaf, but I cannot make its keyboard shortcuts work. At the same time, it is really surprising how few problems I have. I think the most common issues could be quickly fixed if web developers did a bit more testing on mobile devices. Evidently, the fact that developers rely on laptops and desktops explains why things work better on laptops and desktops.
  6. At least on Apple’s iOS, working with text is still unacceptably difficult at times. Pasting text without the accompanying formatting is a major challenge. Selecting large blocks of text is too hard.

My final point is that working with an iPad is more fun than working with a laptop. I cannot tell exactly why that is. I’d be really interested in exploring this “fun” angle further. Maybe it is simply because it is different, but maybe it is not so simple: my smartphone is “fun” even though it is old and familiar.

OCaml Planet: Senior Haskell / Full Stack Developer at PRODA Ltd (Full-time)

things magazine: Cabbages and things

Berberian Sound Studio becomes a stage play. See our earlier post, Aural Excitements / all the strange things are collated and presented by 41Strange, which is a veritable dark cave of the uncanny and the esoteric in 20th century media … Continue reading

The Sirens Sound: The Popular Indie Rock Band: Imagine Dragons

Have you ever heard of Imagine Dragons? Imagine Dragons is an indie band with rock at the root of its music. Imagine Dragons rose to popularity after their singles became big hits in the music industry and on music charts such as Billboard and MTV.

About Imagine Dragons
Imagine Dragons is an American indie band, originally from Las Vegas, Nevada. There are four members, and the lineup has changed since the band was first formed. The current members are Dan Reynolds, Ben McKee, Wayne Sermon, and Daniel Platzman.
Dan Reynolds is the lead vocalist and also plays rhythm guitar, percussion, drums, keyboards, and piano. Wayne Sermon provides backing vocals, guitar, and mandolin. Ben McKee plays piano, keyboards, and bass, and sings backing vocals, while Daniel Platzman plays percussion, guitar, keyboards, violin, and drums.
Some famous Imagine Dragons singles are “Demons”, “Radioactive”, and “It’s Time”. Rolling Stone declared “Radioactive” the “biggest rock hit of the year”, MTV called Imagine Dragons the “Biggest Band of 2017”, and Billboard named them the “Breakthrough Band of 2013.”

The Masterpieces
Imagine Dragons released their first EPs, Imagine Dragons and Hell and Silence, in 2010; both were recorded in Las Vegas. The third EP, It’s Time, was released in 2011. The first album, Night Visions, was finished in 2012; the recording took place at Studio X in the Palms Casino Resort.
Night Visions was released on 4 September 2012. The album topped the charts, won the Billboard Music Award, and was nominated for the Juno Award for International Album of the Year. The second album, Smoke + Mirrors, was released in 2015. Imagine Dragons also recorded the single “Battle Cry” for the soundtrack of Transformers: Age of Extinction.

The third album, Evolve, was released on 23 June 2017. A few months later, Imagine Dragons released the single “Whatever It Takes”, which went on to win the MTV Video Music Award for Best Rock Video. The fourth album, Origins, was released on 9 November 2018.
Imagine Dragons is an indie rock band that has gained popularity worldwide. They have four albums and many top singles that have won various awards.

The Shape of Code: Offer of free analysis of your software engineering data

Since the start of this year, I have been telling people that I am willing to analyze their software engineering data for free, provided they are willing to make the data public; I also offer to anonymize the data for them as part of the free service. Alternatively, you could read this book and do the analysis yourself.

What will you get out of me analyzing your data?

My aim is to find patterns of behavior that will be useful to you. What is useful to you? You have to be the judge of that. It is possible that I will not find anything useful, or perhaps any patterns at all, though this does not happen very often. Over the last year I have found (what I think are useful) patterns in several hundred datasets, with one dataset that I am still scratching my head over.

Data analysis is a two-way conversation. I find some patterns, and we chat about them, hopefully you will say one of them is useful, or point me in a related direction, or even a completely new direction; the process is iterative.

The requirement that an anonymized form of the data be made public is likely to significantly reduce the offers I receive.

There is another requirement that I don’t say much about: the data has to be interesting.

What makes software engineering data interesting, or at least interesting to me?

There has to be lots of it. How much is lots?

Well, that depends on the kind of data. Many kinds of measurements of source code are generally available by the truck load. Measurements relating to human involvement in software development are harder to come by, but becoming more common.

If somebody has a few thousand measurements of some development related software activity, I am very interested. However, depending on the topic, I might even be interested in a couple of dozen measurements.

Some measurements are very rare, and I would settle for as few as two. For instance, multiple implementations of the same set of requirements provide information on system development variability; I was interested in five measurements of the lines of source in five distinct Pascal compilers for the same machine.

Effort estimation data used to be rare; published papers sometimes used to include a table containing the estimate/actual data, which was once gold-dust. These days I would probably only be interested if there were a few hundred estimates, but it would depend on what was being estimated.

If you have some software engineering data that you think I might be interested in, please email me something about the data (and perhaps what you would like to know about it). I’m always open to a chat.

If we both agree that it’s worth looking at your data (I will ask you to confirm that you have the rights to make it public), then you send me the data and off we go.

MattCha's Blog: Rough & Elegant: Two Interesting Yesheng/ Wild Tea Samples from Teapals


Last month I put my very first order through at Teapals.  I picked up what turned out to be the last of these 2003 Shuangjiang Mengku Da Xue Shan Wild puerh bricks (I haven’t tried them yet) and in the order I received a few interesting complimentary samples of wild puerh by Mr. Teapal himself, KL Wong.

I believe these came from KL Wong’s personal collection because I don’t see them for sale on his website.  Usually, I don’t write about samples that are never offered for sale in the Western market, but these two are each a bit unique in their own way.  One of them is among the most ethereal and elegant wilds I have tried, while the other is from Western Xishuangbanna, the Naka region, an area from which we don’t tend to see a lot of yesheng.  So I thought, for educational purposes, I would type up some notes on these two interesting wild teas.  Why is most yesheng/wild that reaches Western markets from Northern Xishuangbanna or from the Yiwu/Western Xishuangbanna producing areas?  This, I don’t know.

2008 Teapals Small Factory Naka Yesheng

The dry leaves smell of hay and distant woody sweetness.

The first infusion has a buttery entry with a soft and woody base with a creamy sweetness.  There is a faint cloud of fruit sweetness on the breath- almost like a tart cherry taste but not really even tart.

The second infusion gives a more woody, less buttery taste.  It has a straw, almost dry wood character with a layer of sweetness.  The mouthfeel is mildly sticky; the breath carries fruits of faint strawberries and wild cherry, more of a creamy sweetness, almost candy floss.  The tastes are faint and delicate but are supported by the woody base.

The third infusion is a touch more woody with a pungent almost wood bark with a long faint fruit aftertaste layered into faint candy floss.  This infusion has a bit of a brackish/ smoke tinge to it.

The fourth infusion starts a touch smoky and woody.  The mouthfeel is mainly on the tongue.  The Qi is pretty mild, with a soft warmth generated in the body; the spine feels nice and loose.  This infusion gets noticeably smoky and slightly rough, with a baked apple sweetness and candy floss covered in a woody, slightly smoky taste.

The fifth infusion has a nice woody, deeper slight smoky, layered dry apple nuance.  Has a slight cooling taste.  The qi is mild and a bit relaxing now.  I can feel it behind the eyes.  This tastes like a pretty traditionally processed wild.  Has a bit of a fruit aftertaste lingering there.

The sixth infusion starts a bit smoky with nuances of wood and almost fruit sweetness.  The qi is quite mild but warming in the body.

The seventh infusion is a bit warm fruity smoky onset slight sandy tongue feel.  Lingering smoke and slight fruit aftertaste.  Long slight sweet, slight smoky taste.  Mild Qi sensation- warming in body and can feel energy in the diaphragm.

Eighth is much the same, with a faded metallic berry nuance.  Energy feels clean and natural.

I steep this for a few more infusions and get a nice smoked plum flavor in an almost sandy tongue feeling.

This is a standard enough wild for an everyday drinker, clean in the body, but with no other overly apparent strengths, and an old-school, traditionally processed vibe.

I enjoy this tea for what it is: a nice little traditionally processed wild from a region I have not yet sampled a wild tea from, Naka.

2018 Teapals Early Spring Wild Yiwu

Dry dark coloured leaves have a heavy dense deep floral syrupy sweetness with lingering candy odour.

The first infusion arrives with a light, watery, vacuous onset with faint icing sugar sweetness and faint woodiness.  The taste is delicate, and the aftertaste minutes later is a mild woody coolness with a distant, long creamy sweetness.

The second infusion has more sweetness in its approach and a soft and elegant long dry woodiness that stretches into a faint bubble gum flavor, long on the breath.  This tea is very elegant and long-tasting.  Its mouthfeel is very soft and mildly viscous on the tongue.  It feels like it reaches deep into the throat but with graceful stimulation there.  The Qi in here is very nice: happy feeling, light floating-head sensation, lax body, smiles.

The third infusion starts off very faint, almost woody with a faint underlying bubble gum sweetness and icing sugar.  The profile is very long, subtle, graceful, super mild.  A soft returning coolness in the deep throat.

The fourth develops some depth with a soapy bubble gum, almost grape Thrills-gum-like taste that pops initially before embracing the thin dry wood tastes.  This initial taste is strung out slowly in the aftertaste and breath.  Deep relaxation; you can feel it in the heart slowing.  The Qi is very, very nice.  Big relaxing and gooey body-sensation Qi here.

The fifth infusion has a woody almost icing sugar onset with dry woods to follow.  It develops a fluffier cotton in mouth feeling and has a woodier almost briny woody and faint floral suggestion.  Deep long faint sweetness of bubble gum on breath.  Floating.

The sixth infusion is developing a subtle chestnut richness with a thin dry wood taste and faint, long, barely-there bubble gum tastes.  The taste is still very elegant, but I wouldn’t say it’s weak or thin; rather, it is fine.  The seventh is much the same as the flavours develop a warmer, almost nutty wood nuance.  The mouthfeel is soft, fluffy, and barely sticky.

The eighth and ninth have a richer nutty wood taste with a barely creamy sweetness.  Overall the taste is sweet but hard to explain, as if the sweetness is coming from the nuttiness.  Faint creamy sweetness in the breath.  The mouthfeel develops a mild astringency now.

The 10th and 11th have a more round, almost metallic woody taste and deep faint long sweetness.  There is faint floral lingering- the mouthfeel is more astringent here.  The taste is mainly woody now slight astringency with metallic and faint breath sweetness.

I had to step away from the tea table very early in the morning and, unfortunately, didn’t return until sunset.  This one seemed to start to fade before I stepped away.  I put it in an overnight infusion and got some really nice dense, rich, syrupy fruit notes.  I put it in another overnight infusion and got much the same, which tells me this one could probably have steeped out nicely.

This wild had really nice qi; it was one of the most fragile wild teas I have sampled so far.  Much more delicate than any other Yiwu wild I have tried in the past (here and here).  Nice to drink now… love the qi in there.

Peace

OCaml Planet: Full Stack Software Engineer (Haskell experience preferred) at Interos Solutions (Full-time)

Michael Geist: Flawed Arguments and Inappropriate Analogies: Why Netflix Taxes and Cancon Requirements Should be Rejected

CBC President Catherine Tait recently sparked a firestorm with comments to an industry conference that likened Netflix, the popular online video service, to the British Raj in India and French in Africa, warning about “imperialism and the damage that it can do to local communities.” The comments were rightly criticized as shockingly inappropriate, as if any video service can be reasonably compared to the subjugation of millions.

My Hill Times op-ed notes that some in the Canadian creator community rushed to defend Tait, however, viewing the comments as a strong assertion for Netflix regulation, the creation of a “level playing field”, and the need for all stakeholders to contribute to the broadcast system. Supporters of Netflix taxes and content requirements – who were joined in the Hill Times last week by Sheila Copps – present a vision of Canadian content at risk without regulatory intervention, leading to the loss of Canada’s “authorial voice” from film and television production.

Industry leaders promised that regulation will come, yet a closer examination of the arguments for Netflix taxes and Cancon requirements reveals that they are based on five deeply flawed premises.

First, there is no existential crisis for Canadian content film and television production. According to the most recent numbers from the Canadian Media Producers Association, the total annual value of the Canadian film and television production sector exceeds $8-billion, its largest amount ever. Spending on Canadian content production has hit an all-time high at $3.3-billion.

In fact, the sector has experienced a huge increase in foreign investment in Canadian production. Before Netflix began investing in original content in 2013, total foreign investment (including foreign location and service production, Canadian theatrical production, and Canadian television) was $2.2-billion. That number has since more than doubled to nearly $4.7-billion.

Second, Cancon regulations are a poor proxy for Canada’s “authorial voice”. The not-so-secret reality of the Canadian system is that Canadian authors are often missing entirely from productions. Films and shows based on Canadian fiction do not count toward meeting the necessary points for Cancon accreditation and even Canadian screenplay writers are not a mandatory requirement.

Unlike many other countries, which adopt flexible standards to determine domestic content, Canada’s rigid approach means that generic police or courtroom dramas may qualify as Canadian content while the television adaptation of Margaret Atwood’s Handmaid’s Tale does not. Moreover, co-production agreements with dozens of countries ensure that many productions qualify for Canadian support despite limited Canadian participation.

Third, if the playing field lacks balance, it is the regulated sector – not Netflix – that is the prime beneficiary. The broadcasting and broadcast distribution sectors receive a wide range of regulatory benefits, making their mandated contributions effectively a quid pro quo for policies such as simultaneous substitution rules, which allow Canadian broadcasters to replace foreign signals and advertising with their own, and copyright retransmission rules, which allow for the retransmission of signals without infringing copyright.

Unlike Netflix, the regulated sector also benefits from must-carry regulations, which mandate the inclusion of many Canadian channels on basic cable and satellite packages; market protection, which has shielded Canadian broadcasters from foreign competition such as HBO or ESPN for decades; and eligibility for Canadian funding programs and tax credits, for which many foreign services are frequently ineligible.

Fourth, notwithstanding the oft-heard insistence that everyone must contribute to the system – Canadian Heritage Minister Pablo Rodriguez has declared “there is no free ride” – contributions to the system stem from an era of scarcity, in which broadcasting featured limited channels using public spectrum with licences granted to a handful of companies. It was that privileged access that led to contributions, not mere participation in the Canadian market.

That is why broadcasters must feature Canadian programming, but movie theatres do not. Or why broadcast distributors contribute a percentage of revenues to support Cancon, but book stores face no such requirement. Indeed, mandated contributions to an economic sector are the exception, not the rule: Canada does not require McDonald’s to contribute a portion of its revenues to support Canadian farmers, or Nike to sell a certain percentage of Canadian-made shoes. In an era of abundance in which Internet streaming does not rely on scarce spectrum, the justification for mandated contributions falls apart.

Fifth, Canadian content is already readily available and easily “discoverable” on the Netflix service. Alongside “official” Cancon, there are programs filmed in Canada, starring Canadian actors, or featuring Canadian stories. Some might argue that only official Cancon counts. Regardless of how it is measured, however, the reality is that Netflix already has a sizeable Canadian library, giving subscribers the option to watch hundreds of hours of Canadian content with little more than a simple search for “Canada.”

Cancon support remains an important ingredient in a vibrant Canadian cultural sector. Yet support such as grants, tax benefits, and other measures should come from general revenues as a matter of public policy, not through cross-subsidization models grounded in flawed arguments and inappropriate analogies.

The post Flawed Arguments and Inappropriate Analogies: Why Netflix Taxes and Cancon Requirements Should be Rejected appeared first on Michael Geist.

OCaml Weekly News: OCaml Weekly News, 12 Feb 2019

  1. PSA: cohttp 2.0 removes old ocamlfind subpackage aliases
  2. Interesting OCaml Articles
  3. Major Release of Base64 / Article
  4. Orsetto: structured data interchange languages (preview release)
  5. OCaml 4.08.0+beta1
  6. OCaml meetup in SF on 2/12
  7. Is anyone doing Design by Contract in OCaml?
  8. Dune and Multicore
  9. Other OCaml News

OCaml Planet: Formal proof and analysis of an incremental cycle detection algorithm

As part of my PhD at Gallium, I have been working on formally proving OCaml programs using Coq. More precisely, the focus has been on proving not only that a program is functionally correct (always computes the right result), but also that it does so in the expected amount of time. In other words, we are interested in formally verifying the asymptotic complexity of OCaml programs.

In this blog post, I’m happy to report on our latest endeavour: the verification of the correctness and (amortized) complexity of a state-of-the-art incremental cycle detection algorithm.

This is joint work with Jacques-Henri Jourdan and my advisors François Pottier and Arthur Charguéraud.

The initial motivation for this work comes from the implementation of Coq itself! More specifically, efficiently checking the consistency of the universe constraints that result from the type-checking phase is a difficult problem, which can be seen as an incremental cycle detection problem. A few years ago, Jacques-Henri reimplemented the part of Coq responsible for this, following a state-of-the-art algorithm published by Bender, Fineman, Gilbert and Tarjan. They prove that, using their algorithm, adding an edge to a graph with m edges and n nodes while ensuring that after each addition the graph remains acyclic has amortized asymptotic complexity O(min(m^(1/2), n^(2/3))). In the common case where the graph is sparse enough, this is equivalent to O(√m).

Jacques-Henri’s implementation resulted in a nice speedup in practice, but it is not so easy to convince oneself that it indeed has the right asymptotic complexity in all cases (in other words, that it does not have a “complexity bug”). The amortized analysis required to establish the O() bound on paper is quite subtle, and for instance relies on a parameter Δ computed at runtime that looks quite magical at first glance.

In the work I’m presenting here, we try to untangle this mystery. We give a formally verified OCaml implementation of a (slightly modified) incremental cycle detection algorithm from Bender et al. We prove that it is not only correct, but also satisfies the expected complexity bound.

Note that this is not yet the exact algorithm that is currently part of Coq’s implementation, but still an important milestone on the way there! (Coq implements the variant by Bender et al. that additionally maintains “strong components”. We believe it could be implemented and verified in a modular fashion, by combining the algorithm we present here and a union-find data structure.)
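
To make the problem concrete, here is a deliberately naive, unverified OCaml sketch of incremental cycle detection (my own illustration, not the algorithm verified in the paper): before inserting an edge u -> v, it checks by depth-first search whether u is already reachable from v. This naive check costs O(m) per insertion; doing much better in an amortized sense, and proving it in Coq, is exactly what the paper is about.

(* Naive incremental cycle detection: reject an edge u -> v whenever
   u is already reachable from v. O(m) per insertion; illustration
   only, unlike the amortized algorithm verified in the paper. *)
type graph = (int, int list) Hashtbl.t

let successors (g : graph) u =
  match Hashtbl.find_opt g u with Some l -> l | None -> []

(* Depth-first search: is [target] reachable from [src]? *)
let reachable (g : graph) src target =
  let visited = Hashtbl.create 16 in
  let rec dfs u =
    u = target
    || (not (Hashtbl.mem visited u)
        && (Hashtbl.add visited u ();
            List.exists dfs (successors g u)))
  in
  dfs src

(* Insert the edge and return [true] if the graph stays acyclic;
   leave the graph unchanged and return [false] otherwise. *)
let add_edge (g : graph) u v =
  if u = v || reachable g v u then false
  else (Hashtbl.replace g u (v :: successors g u); true)

let () =
  let g : graph = Hashtbl.create 16 in
  assert (add_edge g 1 2);
  assert (add_edge g 2 3);
  assert (not (add_edge g 3 1)) (* closing the cycle is rejected *)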

Here’s the draft (currently under submission), and a link to the OCaml code and Coq proofs:

http://gallium.inria.fr/~fpottier/publis/gueneau-jourdan-chargueraud-pottier-2019.pdf

https://gitlab.inria.fr/agueneau/incremental-cycles

We exploit Separation Logic with Time Credits to verify the correctness and worst-case amortized asymptotic complexity of a state-of-the-art incremental cycle detection algorithm.

Happy reading!

Tea Masters: Theine or Serotonin


The Chinese New Year holidays came at just the right time for me to read Michel Houellebecq's new novel, Sérotonine. Without giving away too much of the story, this novel is not very cheerful. It tells the story of a 46-year-old Frenchman (2 years younger than me), a graduate of a good grande école (like me), who has been through a succession of love affairs with highs and unhappy endings (who hasn't, at some point in life?). He becomes depressed and needs to take a drug to supply the serotonin his melancholy demands.

Despite this rather gloomy summary, the book reads very easily and with a certain pleasure, because it helps us better understand the mindset of our era and how French society works, or rather how it no longer works. And yet the narrator should have everything he needs to be happy. He is a well-paid senior civil servant and lives in a certain luxury. But he realizes that his job has no positive impact on those it is supposed to help and, in any case, his career interests him less than his love life. And he believes he has missed the boat on marrying and starting a family.
The novel opens with him taking this drug in the morning, along with a cup of coffee and a cigarette! These three substances seem to go well together. One slides from a mild addiction to ever harder ones. Cigarettes destroy the taste buds. They force you to drink beverages with a very pronounced taste, like coffee, preferably strong, to feel any effect.

Of course, tea also contains a somewhat addictive stimulant, theine. Chemically speaking, it is exactly the same as caffeine. But tea, properly understood and properly practiced, does not lead us down a path of destruction, but down one of beauty and life!
My blog is my proof, or at least my example. Tea is, in essence, a semi-finished product that is very sensitive to how it is prepared. There will be failures, like the teas I systematically ruined before taking tea classes. But the successes are delicious. That motivates you to learn, to favor quality over quantity, to concentrate on the moment of brewing and tasting... Indeed, tea is above all a very fine beverage that can only be fully appreciated in a certain state of calm and attention.

Mind you, tea leads neither to wild, boisterous joy nor to outright hilarity! And it is true that tea is hard to appreciate if you are feeling bad to begin with. Tea is not a cure-all, as some like to present it (against cancer, for weight loss...). It does not replace friends, love, family, children, real vacations...

But a Chaxi remains a wonderful moment for transforming a simple brew into a creative, artistic act, in connection with nature (through the tea leaves and the plant that adorns the Chaxi) and with oneself, by concentrating fully on every gesture. So many doors open, each more interesting than the last. To follow your path, you need only head toward the light!
To all readers depressed by Sérotonine: instead of getting drunk on hard liquor, intoxicate yourselves with good teas!

Volatile and Decentralized: Why I'm leaving Google for a startup

After more than eight years at Google, I'm joining XNOR.ai, a small startup developing AI for embedded devices.

Check out my blog post on Medium here.

OCaml Planet: What’s new for Alt-Ergo in 2018 ? Here is a recap !

After the hard work done on the integration of floating-point arithmetic reasoning two years ago, 2018 was the year of polymorphic SMT2 support and efficient SAT solving for Alt-Ergo. In this post, we recap the main novelties of last year, and we announce the first Alt-Ergo Users’ Club meeting.

An SMT2 front-end with prenex polymorphism

As you may know, Alt-Ergo’s native input language is not compliant with the SMT-LIB 2 input language standard, and translating formulas from SMT-LIB 2 to Alt-Ergo’s syntax (or vice versa) is not immediate. Besides its extension with polymorphism, this native language diverges from SMT-LIB’s by distinguishing terms of type boolean from formulas (which are propositions). This distinction makes it hard, for instance, to efficiently translate let-in and if-then-else constructs that are ubiquitous in SMT-LIB 2 benchmarks.

In order to work closely with the SMT community, we designed a conservative extension of the SMT-LIB 2 standard with prenex polymorphism and implemented it as a new frontend in Alt-Ergo 2.2. This work was published in the 2018 edition of the SMT-Workshop. An online version of the paper is available here. Experimental results showed that polymorphism is really important for Alt-Ergo, as it improves both resolution rate and resolution time (see Figure 5 in the paper for more details).

Improved SAT solvers

We also worked on improving SAT solving in Alt-Ergo last year. The main direction towards this goal was to extend our CDCL-based SAT solver to mimic some desired behaviors of the native Tableaux-like SAT engine. Generally speaking, this allows better management of the context during proof search, which prevents the theories and instantiation engines from being overwhelmed with useless facts. A comparison of this solver with Alt-Ergo’s old Tableaux-like solver is also given in our SMT-Workshop paper.

SMT-Comp and SMT-Workshop 2018

As emphasized above, we published our work regarding polymorphic SMT2 and SAT solving in SMT-Workshop 2018. More generally, this was an occasion for us to write the first tool paper about Alt-Ergo, and to highlight the main features that make it different from other state-of-the-art SMT solvers like CVC4, Z3 or Yices.

Thanks to our new SMT2 frontend, we were able to participate in the SMT-Competition last year. Naturally, we selected categories that are close to “deductive program verification”, as Alt-Ergo is primarily tuned for formulas coming from this application domain.

Although Alt-Ergo did not rank first, it was a positive experience and this encourages us to go ahead. Note that Alt-Ergo’s brother, Ctrl-Ergo, was not far from winning the QF-LIA category of the competition. This performance is partly due to the improvements in the CDCL SAT solver that were also integrated in Ctrl-Ergo.

Alt-Ergo for Atelier-B

Atelier-B is a framework for developing formally verified software using the B Method. The framework rests on an automatic reasoner that discharges thousands of mathematical formulas extracted from B models. If a formula is not discharged automatically, it is proved interactively. ClearSy (the company behind the development of Atelier-B) has recently added a new backend to produce verification conditions in Why3’s logic, in order to target more automatic provers and increase the automation rate. For certifiability reasons, we extended Alt-Ergo with a new frontend that is able to directly parse these verification conditions without relying on Why3.

Improved hash-consed data-structures

As said above, Alt-Ergo makes a clear distinction between Boolean terms and propositions. This distinction prevents us from doing some rewriting and simplifications, in particular on expressions involving let-in and if-then-else constructs. This is why we decided to merge Term, Literal, and Formula into a new Expr data-structure, and remove this distinction. This allowed us to implement some additional simplification steps, and we immediately noticed performance improvements, in particular on SMT2 benchmarks. For instance, Alt-Ergo 2.3 proves 19548 formulas of the AUFLIRA category in ~350 minutes, while version 2.2 proves 19535 formulas in ~1450 minutes (the time limit was set to 20 minutes per formula).
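
Hash-consing is the standard OCaml idiom behind this kind of Expr data-structure: a smart constructor interns every node in a global table, so structurally equal expressions are built only once, share memory, and can be compared by physical equality in constant time. Here is a generic sketch of the technique, with a made-up toy node type; this is an illustration, not Alt-Ergo’s actual Expr module:

(* Minimal hash-consing sketch: structurally equal expressions are
   built once, so equality reduces to pointer comparison. The node
   type here is a toy, not Alt-Ergo's real Expr. *)
type expr = { node : node; tag : int }
and node =
  | Var of string
  | Add of expr * expr

module H = Hashtbl.Make (struct
  type t = node
  let equal a b =
    match a, b with
    | Var x, Var y -> String.equal x y
    (* children are already hash-consed, so compare them physically *)
    | Add (a1, a2), Add (b1, b2) -> a1 == b1 && a2 == b2
    | _ -> false
  let hash = function
    | Var x -> Hashtbl.hash x
    | Add (a, b) -> Hashtbl.hash (a.tag, b.tag)
end)

let table : expr H.t = H.create 251
let counter = ref 0

let hashcons n =
  match H.find_opt table n with
  | Some e -> e
  | None ->
      incr counter;
      let e = { node = n; tag = !counter } in
      H.add table n e;
      e

let var x = hashcons (Var x)
let add a b = hashcons (Add (a, b))

let () =
  let e1 = add (var "x") (var "y") in
  let e2 = add (var "x") (var "y") in
  assert (e1 == e2) (* physical equality, O(1) *)

The tag field also gives every expression a cheap hash key, which matters in a solver where the same subterm may be visited millions of times.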

Towards the integration of algebraic datatypes

Last autumn, we also started working on the integration of algebraic datatypes reasoning in Alt-Ergo. In this first iteration, we extended Alt-Ergo’s native language to be able to declare (mutually recursive) algebraic datatypes, to write expressions with pattern matching, to handle selectors, … We then extended the typechecker accordingly and implemented a (not that) basic theory reasoner. Of course, we also handle SMT2’s algebraic datatypes. Here is an example in Alt-Ergo’s native syntax:

type ('a, 'b) t = A of {a_1 : 'a} | B of {b_11 : 'a ; b12 : 'b} | C | D | E

logic e : (int, real) t
logic n : int

axiom ax_n : n >= 9

axiom ax_e:
  e = A(n) or e = B(n*n, 0.) or e = E

goal g:
  match e with
   | A(u) -> u >= 8
   | B (u,v) -> u >= 80 and v = 0.
   | E -> true
   | _ -> false
  end 
  and 3 <= 2+2

What is planned in 2019 and beyond: the Alt-Ergo’s Users’ Club is born!

In 2018, we welcomed a lot of new engineers with a background in formal methods: Steven (De Oliveira) holds a PhD in formal verification from the Paris-Saclay University and the French Atomic Energy Commission (CEA). He has a master’s in cryptography and worked in the Frama-C team, developing open-source tools for verifying C programs. David (Declerck) obtained a PhD from Université Paris-Saclay in 2018, during which he extended the Cubicle model checker to support weak memory models and wrote a compiler from a subset of the x86 assembly language to Cubicle. Guillaume (Bury) holds a PhD from Université Sorbonne Paris Cité. He studied the integration of rewriting techniques inside SMT solvers. Albin (Coquereau) is working as a PhD student between OCamlPro, LRI and ENSTA, focusing on improving the Alt-Ergo SMT solver. Adrien is interested in verification of safety properties over software and embedded systems. He worked on higher-order functional program verification at the University of Tokyo, and on the Kind 2 model checker at the University of Iowa. All these people will consolidate the department of formal methods at OCamlPro, which will be beneficial for Alt-Ergo.

In 2019, we have just launched the Alt-Ergo Users’ Club, in order to get closer to our users, collect their needs, and integrate them into the Alt-Ergo roadmap, but also to ensure sustainable funding for the development of the project. We are happy to announce that the very first member of the Club is AdaCore, very soon to be followed by Trust-In-Soft and CEA List. Thanks for your early support!

Interested in joining? Contact us: contact@ocamlpro.com

things magazine: Creativity and Value

I am not the next big thing, on creativity and the value of screaming into the void (via MeFi) / paintings by Andy Dixon (via HiFructose) / maps by Scott Reinhard / porcelain globes by Loraine Rutt. Some more info … Continue reading

Perlsphere: January report of the Perl 6 Development Grant of Jonathan Worthington

Jonathan writes:

January was a busy and productive month for my Perl 6 grant work.

Back in November, I was working on allowing us to lower lexical variables into locals. This is possible when they are not closed over, and allows for generation of more efficient code, which is in turn much easier for backends - MoarVM, the JVM, and JavaScript - to deal with in their own optimizers. It can also shorten the lifetimes of objects, leading to better memory performance. I completed and merged the almost-complete work to be able to lower normal variable declarations (such as my $a = 42). This will combine well with the ongoing work on escape analysis, since it will be able to eliminate many allocations of Scalar containers.

The next step was to take on the lowering of $_. In Perl 6.d, we made this a normal lexical rather than a dynamic lexical (available through CALLER::) in order to allow further analysis and optimization. Relatively few Perl 6 users were aware of its dynamic nature anyway, and fewer still made use of it. With the topic variable of course showing up in numerous common idioms, it would be a pity to then have people avoid them in the name of performance, due to the dynamic nature of $_ impeding analysis. Thus the change in 6.d.

This month I worked on implementing the most immediate optimization that this enabled: being able to also lower $_ into a local. In a loop like for 1..10_000_000 { something() }, previously we would have to allocate an Int object for every single iteration. Since $_ was dynamic, we could not be sure that something() - or anything it in turn called - would not access it. Under the new semantics, we can lower it into a local, and then dead code analysis can see that the boxing is useless and discard it, saving a memory allocation every iteration. Even were it used, for example in an array index, we now have opened the door to being able to use escape analysis in the future to also avoid the allocation.

This work had some amount of fallout, and turned up some rather dubious semantics around regexes matching against the topic variable when in boolean or sink context. This included some curious “at a distance” behavior, where things either worked by a happy accident, or could potentially match against completely unexpected data further down the callstack thanks to the thing they were expected to match against being undefined! I proposed a cleaner set of behaviors here and, with a lack of objections, implemented them.

Along with these optimizations, I also implemented more aggressive scope flattening in Rakudo’s static optimizer. When we can show that a particular scope’s existence would not be missed - such as the body of an if statement which doesn’t have any variables in it that get closed over - we can flatten it into the surrounding block. This is a bit cheaper at runtime and leads to more chances to do lexical to local lowering, which - as already mentioned - is the gateway to further optimizations.

Back over in MoarVM, I continued my work on escape analysis and scalar replacement. I took a small detour to implement retention of deoptimization information in specialized bytecode, so we can recover it when inlining. This will allow us to more aggressively optimize code that we inline. I didn’t yet enable the full set of optimizations we might do there, but I was able to enable dead instruction elimination, which can save some work (for example, if we see a return value is unused, we can strip out the work that produces it). This more detailed data was also required for the analysis algorithm that works out which object allocations, eliminated as part of scalar replacement, need to be restored when deoptimizing. I got most of that worked out, with a few issues left before it could be merged. (Spoiler: those were resolved in early February and it was merged.)

The lexical to local lowering optimization in Rakudo results in lower quality information being available to debuggers, and with the scope of it greatly expanding with my recent work this would have become quite noticeable. I implemented a new mechanism to retain symbol mappings in order to counteract that.

Further to this work, I worked on 8 other smaller issues.

15:50   Lexical to local lowering, including of $_ where possible
07:59   More aggressive block flattening
05:33   Keep deopt information when inlining, allowing more dead code elimination
07:59   Implement deoptimization of scalar-replaced allocations
00:56   Fix sometimes-missing redeclaration error
01:47   Make exceptions doing X::Control be thrown as control exceptions
01:06   Fix hang when a Channel with a reactive subscription was closed and then drained.
00:48   Look into a crash in t/04-nativecall/06-struct.t, comment on issue
01:59   Track down a runaway memory use spesh plugin bug and fix it
00:42   Fix inaccurate reporting of line numbers in simple routines
06:04   Preserve symbol names for debug in lexical to local lowering
00:39   Fix reporting of Failure where exception has no message method
03:18   Analyze and fix an optimizer bug with when inside a loop

Total: 54:39

Of which were covered by previous grant period: 48:30
Of which were covered by current grant period: 6:09
Remaining funding-approved hours on current grant period: 160:51
Remaining community-approved hours on current grant period: 326:51

Perlsphere: Request for Comments: Dave Rolsky's class at TPC 2019

Dave Rolsky taught classes at YAPC/TPC for a number of years and he was compensated for it. As Dave is a new member of the TPF Board and TPF organizes TPC, there is a potential conflict of interest. Dave wishes to continue his class.

The typical rate paid by students is in the range of $150-$175 ($165 in 2018). TPC takes a portion of it to cover expenses such as the venue, and the remaining part goes to Dave. Other costs, such as transport and accommodation, were paid by Dave.

Dave also participates in TPC as a volunteer, working on the website for 2019. Dave is not involved in talk or class selection, nor is he in any Slack channel/email list/etc., where this will be discussed.

If you have an opinion on whether TPC should have his class or not, or if you have a plan to teach at TPC which may conflict with Dave's class, please comment here or email me at makoto@perlfoundation.org.

OCaml Planet: Learning a New Language

Generally, every program I write, regardless of what useful thing it actually does, and regardless of what programming language it is written in, has to do certain things, which usually include

  • Importing a library and calling functions contained within that library
  • Handling datatypes such as converting between strings and integers, and knowing when this is implicit or explicit, how dates and times work, and so on
  • Getting command line parameters or parsing a configuration file
  • Writing log messages such as to files or the system log
  • Handling errors and exceptions
  • Connecting to services such as a database, a REST API, a message bus etc
  • Reading and writing files to the disk, or to blob storage or whatever it’s called this week
  • Spawning threads and processes and communicating between them
  • Building a package whether that’s a self-contained binary, an RPM, an OCI container, whatever is native to that language and the platform

It’s easy to find examples of most of these things using resources such as Rosetta Code, and my first real program will be a horrific cut-and-paste mess – but it will get me started, and I’ll soon refine it, absorb the idiomatic patterns of the language, and before long be writing fluently in it and knowing my way around the ecosystem: what libraries are available, the strengths and weaknesses of the language, the libraries, the community, and so on. Once you have done this a few times it becomes easy, and you can stop worrying so much about being a “language X programmer” and concentrate on the important stuff, which is the problem domain you are working in.
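
As a concrete sketch, here is what a tiny first-day OCaml exercise touching three of the checklist items above (command-line parameters, file I/O, and error handling) might look like; everything in it, names included, is made up for illustration:

(* A first-day exercise: count the lines in a file named on the
   command line, logging errors to stderr. Purely illustrative. *)
let log_msg msg = Printf.eprintf "[count_lines] %s\n%!" msg

let count_lines path =
  let ic = open_in path in (* raises Sys_error on failure *)
  let n = ref 0 in
  (try
     while true do ignore (input_line ic); incr n done
   with End_of_file -> close_in ic);
  !n

let () =
  match Sys.argv with
  | [| _; path |] ->
      (try Printf.printf "%s: %d lines\n" path (count_lines path)
       with Sys_error e -> log_msg e; exit 1)
  | _ ->
      log_msg "usage: count_lines <file>"; exit 2

Datatype handling, library imports, packaging and the rest follow the same pattern: find one working example, absorb it, and move on.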

The Sirens Sound: The Rising Star: The xx

If you are a fan of pop songs and want to hear a different pop genre, you should listen to an indie pop band called The xx. The xx is an indie pop band from Wandsworth, London. The xx uses various instruments in their music, such as echoed guitar, electronic beats, soundscapes, and bass.

About The xx
The xx comes from Wandsworth, London, England. The band takes pop, R&B, and electro as its roots. It was established in 2005 and remains active to the present. It has three active members: Romy Madley Croft, Oliver Sim, and Jamie Smith. Baria Qureshi used to be a member of The xx but no longer plays in the band.

Romy Madley Croft is the main vocalist and guitarist. Oliver Sim serves as vocalist and bassist. The third member, Jamie Smith, also known as Jamie xx, handles beats, record production, and the MPC. The xx signed with Young Turks and XL as their music labels.

How did The xx form?
The xx started at the Elliott School. All the members, including the former member, studied at that same school; that’s how the band was formed. Romy Madley Croft and Oliver Sim were in the same year at the Elliott School, and Baria Qureshi arrived at the same time. A year later, Jamie Smith joined the school and the story began. These talented young musicians started to make a demo for their band and posted it on Myspace. Soon, the demo grabbed the attention of the Young Turks music label.

The Albums
With the help of Young Turks and Rodaidh McDonald as music producer, their first album was released in 2009. The album was a hit, reaching number three on the UK Albums Chart and number one in the Guardian’s. The album also won the Mercury Prize in 2010.

The second album, entitled Coexist, was released on September 5, 2012. This album also became a hit and received positive feedback, reaching number one in the UK and number five on Billboard. The third album, titled “I See You”, was released on January 13, 2017, and reached number two on the Billboard 200.

The xx is an English indie band focusing on pop, R&B, and electro. So far, The xx has released three albums, all of them huge successes that topped famous music charts around the world.

bit-player: Divisive factorials!

The other day I was derailed by this tweet from Fermat’s Library:

Inverse factorial tweet

The moment I saw it, I had to stop in my tracks, grab a scratch pad, and check out the formula. The result made sense in a rough-and-ready sort of way. Since the multiplicative version of \(n!\) goes to infinity as \(n\) increases, the “divisive” version should go to zero. And \(\frac{n^2}{n!}\) does exactly that; the polynomial function \(n^2\) grows slower than the exponential function \(n!\) for large enough \(n\):

\[\frac{1}{1}, \frac{4}{2}, \frac{9}{6}, \frac{16}{24}, \frac{25}{120}, \frac{36}{720}, \frac{49}{5040}, \frac{64}{40320}, \frac{81}{362880}, \frac{100}{3628800}.\]

But why does the quotient take the particular form \(\frac{n^2}{n!}\)? Where does the \(n^2\) come from?

To answer that question, I had to revisit the long-ago trauma of learning to divide fractions, but I pushed through the pain. Proceeding from left to right through the formula in the tweet, we first get \(\frac{n}{n-1}\). Then, dividing that quantity by \(n-2\) yields

\[\cfrac{\frac{n}{n-1}}{n-2} = \frac{n}{(n-1)(n-2)}.\]

Continuing in the same way, we ultimately arrive at:

\[n \mathbin{/} (n-1) \mathbin{/} (n-2) \mathbin{/} (n-3) \mathbin{/} \cdots \mathbin{/} 1 = \frac{n}{(n-1) (n-2) (n-3) \cdots 1} = \frac{n}{(n-1)!}\]

To recover the tweet’s stated result of \(\frac{n^2}{n!}\), just multiply numerator and denominator by \(n\). (To my taste, however, \(\frac{n}{(n-1)!}\) is the more perspicuous expression.)


I am a card-carrying factorial fanboy. You can keep your fancy Fibonaccis; this is my favorite function. Every time I try out a new programming language, my first exercise is to write a few routines for calculating factorials. Over the years I have pondered several variations on the theme, such as replacing \(\times\) with \(+\) in the definition (which produces triangular numbers). But I don’t think I’ve ever before considered substituting \(\mathbin{/}\) for \(\times\). It’s messy. Because multiplication is commutative and associative, you can define \(n!\) simply as the product of all the integers from \(1\) through \(n\), without worrying about the order of the operations. With division, order can’t be ignored. In general, \(x \mathbin{/} y \ne y \mathbin{/}x\), and \((x \mathbin{/} y) \mathbin{/} z \ne x \mathbin{/} (y \mathbin{/} z)\).

The Fermat’s Library tweet puts the factors in descending order: \(n, n-1, n-2, \ldots, 1\). The most obvious alternative is the ascending sequence \(1, 2, 3, \ldots, n\). What happens if we define the divisive factorial as \(1 \mathbin{/} 2 \mathbin{/} 3 \mathbin{/} \cdots \mathbin{/} n\)? Another visit to the schoolroom algorithm for dividing fractions yields this simple answer:

\[1 \mathbin{/} 2 \mathbin{/} 3 \mathbin{/} \cdots \mathbin{/} n = \frac{1}{2 \times 3 \times 4 \times \cdots \times n} = \frac{1}{n!}.\]

In other words, when we repeatedly divide while counting up from \(1\) to \(n\), the final quotient is the reciprocal of \(n!\). (I wish I could put an exclamation point at the end of that sentence!) If you’re looking for a canonical answer to the question, “What do you get if you divide instead of multiplying in \(n!\)?” I would argue that \(\frac{1}{n!}\) is a better candidate than \(\frac{n}{(n - 1)!}\). Why not embrace the symmetry between \(n!\) and its inverse?

Of course there are many other ways to arrange the \(n\) integers in the set \(\{1 \ldots n\}\). How many ways? As it happens, \(n!\) of them! Thus it would seem there are \(n!\) distinct ways to define the divisive \(n!\) function. However, looking at the answers for the two permutations discussed above suggests there’s a simpler pattern at work. Whatever element of the sequence happens to come first winds up in the numerator of a big fraction, and the denominator is the product of all the other elements. As a result, there are really only \(n\) different outcomes—assuming we stick to performing the division operations from left to right. For any integer \(k\) between \(1\) and \(n\), putting \(k\) at the head of the queue creates a divisive \(n!\) equal to \(k\) divided by all the other factors. We can write this out as:

\[\cfrac{k}{\frac{n!}{k}}, \text{ which can be rearranged as } \frac{k^2}{n!}.\]

And thus we also solve the minor mystery of how \(\frac{n}{(n-1)!}\) became \(\frac{n^2}{n!}\) in the tweet.
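
As a quick sanity check (my arithmetic, not from the tweet): take \(n = 5\) and put \(k = 3\) at the head of the queue. Dividing from left to right,

\[3 \mathbin{/} 1 \mathbin{/} 2 \mathbin{/} 4 \mathbin{/} 5 = \frac{3}{1 \times 2 \times 4 \times 5} = \frac{3}{40} = \frac{9}{120} = \frac{3^2}{5!}.\]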

It’s worth noting that all of these functions converge to zero as \(n\) goes to infinity. Asymptotically speaking, \(\frac{1^2}{n!}, \frac{2^2}{n!}, \ldots, \frac{n^2}{n!}\) are all alike.


Ta dah! Mission accomplished. Problem solved. Done and dusted. Now we know everything there is to know about divisive factorials, right?

Well, maybe there’s one more question. What does the computer say? If you take your favorite factorial algorithm, and do as the tweet suggests, replacing any appearance of the \(\times\) (or *) operator with /, what happens? Which of the \(n\) variants of divisive \(n!\) does the program produce?

Here’s my favorite algorithm for computing factorials, in the form of a Julia program:

function mul!(n)
    if n == 1
        return 1
    else
        return n * mul!(n - 1)
    end
end

This is the algorithm that has introduced generations of nerds to the concept of recursion. In narrative form it says: If \(n\) is \(1\), then \(mul!(n)\) is \(1\). Otherwise, evaluate the function \(mul!(n-1)\), then multiply the result by \(n\). You might ask what happens if \(n\) is zero or negative. You might ask, but please don’t. For present purposes, \(n \in \mathbb{N}\). Starting with any positive \(n\), the sequence of recursive calls must eventually bottom out with \(n = 1\).

The function can be written more tersely using Julia’s one-liner style of definition:

mul!(n)  =  n == 1 ? 1 : n * mul!(n - 1)

The right side of the assignment statement is a conditional expression, or ternary operator, which has the form a ? b : c. Here a is a boolean test clause, which must return a value of either true or false. If a is true, clause b is evaluated, and the result becomes the value of the entire expression. Otherwise clause c is evaluated.

Just to be sure I’ve got this right, here are the first 10 factorials, as calculated by this program:

[mul!(n) for n in 1:10]
10-element Array{Int64,1}:
       1
       2
       6
      24
     120
     720
    5040
   40320
  362880
 3628800

Now let’s edit that definition and convert the single occurrence of * to a /, leaving everything else (except the name of the function) unchanged.

div!(n)  =  n == 1 ? 1 : n / div!(n - 1)

And here’s what comes back when we run the program for values of \(n\) from \(1\) through \(20\):

[div!(n) for n in 1:20]
20-element Array{Real,1}:
 1                 
 2.0               
 1.5               
 2.6666666666666665
 1.875             
 3.2               
 2.1875            
 3.657142857142857 
 2.4609375         
 4.063492063492063 
 2.70703125        
 4.432900432900433 
 2.9326171875      
 4.773892773892774 
 3.14208984375     
 5.092152292152292 
 3.338470458984375 
 5.391690662278897 
 3.523941040039063 
 5.675463855030418 

Huh? That sure doesn’t look like it’s converging to zero—not as \(\frac{1}{n!}\), and not as \(\frac{n}{(n-1)!}\). As a matter of fact, it doesn’t look like it’s going to converge at all. The graph below suggests the sequence is made up of two alternating components, both of which appear to be slowly growing toward infinity as well as diverging from one another.

Graph of div!(n) for n = 1 through 20

In trying to make sense of what we’re seeing here, it helps to change the output type of the div! function. Instead of applying the division operator /, which returns the quotient as a floating-point number, we can substitute the // operator, which returns an exact rational quotient, reduced to lowest terms.

div!(n)  =  n == 1 ? 1 : n // div!(n - 1)

Here’s the sequence of values for n in 1:20:

20-element Array{Real,1}:
       1      
      2//1    
      3//2    
      8//3    
     15//8    
     16//5    
     35//16   
    128//35   
    315//128  
    256//63   
    693//256  
   1024//231  
   3003//1024 
   2048//429  
   6435//2048 
  32768//6435 
 109395//32768
  65536//12155
 230945//65536
 262144//46189 

The list is full of curious patterns. It’s a double helix, with even numbers and odd numbers zigzagging in complementary strands. The even numbers are not just even; they are all powers of \(2\). Also, they appear in pairs—first in the numerator, then in the denominator—and their sequence is nondecreasing. But there are gaps; not all powers of \(2\) are present. The odd strand looks even more complicated, with various small prime factors flitting in and out of the numbers. (The primes have to be small—smaller than \(n\), anyway.)

This outcome took me by surprise. I had really expected to see a much tamer sequence, like those I worked out with pencil and paper. All those jagged, jitterbuggy ups and downs made no sense. Nor did the overall trend of unbounded growth in the ratio. How could you keep dividing and dividing, and wind up with bigger and bigger numbers?

At this point you may want to pause before reading on, and try to work out your own theory of where these zigzag numbers are coming from. If you need a hint, you can get a strong one—almost a spoiler—by looking up the sequence of numerators or the sequence of denominators in the Online Encyclopedia of Integer Sequences.


Here’s another hint. A small edit to the div! program completely transforms the output. Just flip the final clause, changing n // div!(n - 1) into div!(n - 1) // n.

div!(n)  =  n == 1 ? 1 : div!(n - 1) // n

Now the results look like this:

10-element Array{Real,1}:
  1                    
 1//2                  
 1//6                  
 1//24                 
 1//120                
 1//720                
 1//5040               
 1//40320              
 1//362880             
 1//3628800

This is the inverse factorial function we’ve already seen, the series of quotients generated when you march left to right through an ascending sequence of divisors \(1 \mathbin{/} 2 \mathbin{/} 3 \mathbin{/} \cdots \mathbin{/} n\).

It’s no surprise that flipping the final clause in the procedure alters the outcome. After all, we know that division is not commutative or associative. What’s not so easy to see is why the sequence of quotients generated by the original program takes that weird zigzag form. What mechanism is giving rise to those paired powers of 2 and the alternation of odd and even?

I have found that it’s easier to explain what’s going on in the zigzag sequence when I describe an iterative version of the procedure, rather than the recursive one. (This is an embarrassing admission for someone who has argued that recursive definitions are easier to reason about, but there you have it.) Here’s the program:

function div!_iter(n)
    q = 1
    for i in 1:n
        q = i // q
    end
    return q
end

I submit that this looping procedure is operationally identical to the recursive function, in the sense that if div!(n) and div!_iter(n) both return a result for some positive integer n, it will always be the same result. Here’s my evidence:

[div!(n) for n in 1:20]    [div!_iter(n) for n in 1:20]
            1                         1//1    
           2//1                       2//1    
           3//2                       3//2    
           8//3                       8//3    
          15//8                      15//8    
          16//5                      16//5    
          35//16                     35//16   
         128//35                    128//35   
         315//128                   315//128  
         256//63                    256//63   
         693//256                   693//256  
        1024//231                  1024//231  
        3003//1024                 3003//1024 
        2048//429                  2048//429  
        6435//2048                 6435//2048 
       32768//6435                32768//6435 
      109395//32768              109395//32768
       65536//12155               65536//12155
      230945//65536              230945//65536
      262144//46189              262144//46189

To understand the process that gives rise to these numbers, consider the successive values of the variables \(i\) and \(q\) each time the loop is executed. Initially, \(i\) and \(q\) are both set to \(1\); hence, after the first passage through the loop, the statement q = i // q gives \(q\) the value \(\frac{1}{1}\). Next time around, \(i = 2\) and \(q = \frac{1}{1}\), so \(q\)’s new value is \(\frac{2}{1}\). On the third iteration, \(i = 3\) and \(q = \frac{2}{1}\), yielding \(\frac{i}{q} \rightarrow \frac{3}{2}\). If this is still confusing, try thinking of \(\frac{i}{q}\) as \(i \times \frac{1}{q}\). The crucial observation is that on every passage through the loop, \(q\) is inverted, becoming \(\frac{1}{q}\).

If you unwind these operations, and look at the multiplications and divisions that go into each element of the series, a pattern emerges:

\[\frac{1}{1}, \quad \frac{2}{1}, \quad \frac{1 \cdot 3}{2}, \quad \frac{2 \cdot 4}{1 \cdot 3}, \quad \frac{1 \cdot 3 \cdot 5}{2 \cdot 4}, \quad \frac{2 \cdot 4 \cdot 6}{1 \cdot 3 \cdot 5}\]

The general form is:

\[\frac{1 \cdot 3 \cdot 5 \cdot \cdots \cdot n}{2 \cdot 4 \cdot \cdots \cdot (n-1)} \quad (\text{odd } n) \qquad \frac{2 \cdot 4 \cdot 6 \cdot \cdots \cdot n}{1 \cdot 3 \cdot 5 \cdot \cdots \cdot (n-1)} \quad (\text{even } n).
\]


The functions \(1 \cdot 3 \cdot 5 \cdot \cdots \cdot n\) for odd \(n\) and \(2 \cdot 4 \cdot 6 \cdot \cdots \cdot n\) for even \(n\) have a name! They are known as double factorials, with the notation \(n!!\). Terrible terminology, no? Better to have named them “semi-factorials.” And if I didn’t know better, I would read \(n!!\) as “the factorial of the factorial.” The double factorial of n is defined as the product of n and all smaller positive integers of the same parity. Thus our peculiar sequence of zigzag quotients is simply \(\frac{n!!}{(n-1)!!}\).
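
We can check this identity numerically. Here is a quick sketch (doublefac and zigzag are my own names; div!_iter is the looping function defined above):

doublefac(n)  =  n <= 1 ? 1 : n * doublefac(n - 2)
zigzag(n)  =  doublefac(n) // doublefac(n - 1)

all(zigzag(n) == div!_iter(n) for n in 1:20)   # returns true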

A 2012 article by Henry W. Gould and Jocelyn Quaintance (behind a paywall, regrettably) surveys the applications of double factorials. They turn up more often than you might guess. In the middle of the 17th century John Wallis came up with this identity:

\[\frac{\pi}{2} = \frac{2 \cdot 2 \cdot 4 \cdot 4 \cdot 6 \cdot 6 \cdots}{1 \cdot 3 \cdot 3 \cdot 5 \cdot 5 \cdot 7 \cdots} = \lim_{n \rightarrow \infty} \frac{((2n)!!)^2}{(2n + 1)!!(2n - 1)!!}\]
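
The partial products converge quite slowly, which is easy to confirm with a sketch (dfac and wallis are my own names; BigInt arithmetic avoids overflow in the double factorials):

dfac(n)  =  n <= 1 ? big(1) : n * dfac(n - 2)
wallis(n)  =  dfac(2n)^2 / (dfac(2n + 1) * dfac(2n - 1))

wallis(1000)   # ≈ 1.5704, creeping up on pi/2 ≈ 1.5708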

An even weirder series, involving the cube of a quotient of double factorials, sums to \(\frac{2}{\pi}\). That one was discovered by (who else?) Srinivasa Ramanujan.

Gould and Quaintance also discuss the double factorial counterpart of binomial coefficients. The standard binomial coefficient is defined as:

\[\binom{n}{k} = \frac{n!}{k! (n-k)!}.\]

The double version is:

\[\left(\!\binom{n}{k}\!\right) = \frac{n!!}{k!! (n-k)!!}.\]

Note that our zigzag numbers fit this description and therefore qualify as double factorial binomial coefficients. Specifically, they are the numbers:

\[\left(\!\binom{n}{1}\!\right) = \left(\!\binom{n}{n - 1}\!\right) = \frac{n!!}{1!! (n-1)!!}.\]

The regular binomial \(\binom{n}{1}\) is not very interesting; it is simply equal to \(n\). But the doubled version \(\left(\!\binom{n}{1}\!\right)\), as we’ve seen, dances a livelier jig. And, unlike the single binomial, it is not always an integer. (The only integer values are \(1\) and \(2\).)

Seeing the zigzag numbers as ratios of double factorials explains quite a few of their properties, starting with the alternation of evens and odds. We can also see why all the even numbers in the sequence are powers of 2. Consider the case of \(n = 6\). The numerator of this fraction is \(2 \cdot 4 \cdot 6 = 48\), which acquires a factor of \(3\) from the \(6\). But the denominator is \(1 \cdot 3 \cdot 5 = 15\). The \(3\)s above and below cancel, leaving \(\frac{16}{5}\). Such cancelations will happen in every case. Whenever an odd factor \(m\) enters the even sequence, it must do so in the form \(2 \cdot m\), but at that point \(m\) itself must already be present in the odd sequence.
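
Both observations are easy to spot-check (a quick sketch; ispow2, numerator, and denominator are standard Julia functions, and div!_iter is defined above):

all(ispow2(numerator(div!_iter(n))) for n in 2:2:40)     # even n: numerator is a power of 2 (true)
all(ispow2(denominator(div!_iter(n))) for n in 3:2:39)   # odd n: denominator is a power of 2 (true)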


Is the sequence of zigzag numbers a reasonable answer to the question, “What happens when you divide instead of multiply in \(n!\)?” Or is the computer program that generates them just a buggy algorithm? My personal judgment is that \(\frac{1}{n!}\) is a more intuitive answer, but \(\frac{n!!}{(n - 1)!!}\) is more interesting.

Furthermore, the mere existence of the zigzag sequence broadens our horizons. As noted above, if you insist that the division algorithm must always chug along the list of \(n\) factors in order, at each stop dividing the number on the left by the number on the right, then there are only \(n\) possible outcomes, and they all look much alike. But the zigzag solution suggests wilder possibilities. We can formulate the task as follows. Take the set of factors \(\{1 \dots n\}\), select a subset, and invert all the elements of that subset; now multiply all the factors, both the inverted and the upright ones. If the inverted subset is empty, the result is the ordinary factorial \(n!\). If all of the factors are inverted, we get the inverse \(\frac{1}{n!}\). And if every second factor is inverted, starting with \(n - 1\), the result is an element of the zigzag sequence.
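
Here is a sketch of that formulation (invert_subset is my own name, not from the original post):

invert_subset(n, S)  =  prod((i in S ? 1//i : i//1) for i in 1:n)

invert_subset(5, Set{Int}())    # 120//1, the ordinary factorial
invert_subset(5, Set(1:5))      # 1//120, the inverse factorial
invert_subset(5, Set([2, 4]))   # 15//8, a zigzag number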

These are only a few among the many possible choices; in total there are \(2^n\) subsets of \(n\) items. For example, you might invert every number that is prime or a power of a prime \((2, 3, 4, 5, 7, 8, 9, 11, \dots)\). For small \(n\), the result jumps around but remains consistently less than \(1\):

[Figure: Prime powers, the running product with prime and prime-power factors inverted]

If I were to continue this plot to larger \(n\), however, it would take off for the stratosphere. Prime powers get sparse farther out on the number line.


Here’s a question. We’ve seen factorial variants that go to zero as \(n\) goes to infinity, such as \(1/n!\). We’ve seen other variants grow without bound as \(n\) increases, including \(n!\) itself, and the zigzag numbers. Are there any versions of the factorial process that converge to a finite bound other than zero?

My first thought was this algorithm:

function greedy_balance(n)
    q = 1
    while n > 0
        q = q > 1 ? q / n : q * n   # q too big? divide. q too small? multiply.
        n -= 1
    end
    return q
end

We loop through the integers from \(n\) down to \(1\), calculating the running product/quotient \(q\) as we go. At each step, if the current value of \(q\) is greater than \(1\), we divide by the next factor; otherwise, we multiply. This scheme implements a kind of feedback control or target-seeking behavior. If \(q\) gets too large, we reduce it; too small and we increase it. I conjectured that as \(n\) goes to infinity, \(q\) would settle into an ever-narrower range of values near \(1\).

Running the experiment gave me another surprise:

[Figure: Greedy balance, plotted on a linear scale]

That sawtooth wave is not quite what I expected. One minor peculiarity is that the curve is not symmetric around \(1\); the excursions above have higher amplitude than those below. But this distortion is more visual than mathematical. Because \(q\) is a ratio, the distance from \(1\) to \(10\) is the same as the distance from \(1\) to \(\frac{1}{10}\), but it doesn’t look that way on a linear scale. The remedy is to plot the log of the ratio:

[Figure: Greedy balance, log of the ratio]

Now the graph is symmetric, or at least approximately so, centered on \(0\), which is the logarithm of \(1\). But a larger mystery remains. The sawtooth waveform is very regular, with a period of \(4\), and it shows no obvious signs of shrinking toward the expected limiting value of \(\log q = 0\). Numerical evidence suggests that as \(n\) goes to infinity the peaks of this curve converge on a value just above \(q = \frac{5}{3}\), and the troughs approach a value just below \(q = \frac{3}{5}\). (The corresponding base-\(10\) logarithms are roughly \(\pm 0.222\).) I have not worked out why this should be so. Perhaps someone will explain it to me.

The failure of this greedy algorithm doesn’t mean we can’t find a divisive factorial that converges to \(q = 1\). If we work with the logarithms of the factors, this procedure becomes an instance of a well-known computational problem called the number partitioning problem: you are given a set of real numbers and asked to divide it into two subsets whose sums are equal, or as close to equal as possible. It’s a certifiably hard problem, but it has also been called (PDF) “the easiest hard problem.” For any given \(n\), we might find that inverting some other subset of the factors gives a better approximation to \(q = 1\). For small \(n\), we can solve the problem by brute force: just look at all \(2^n\) subsets and pick the best one.
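
Here is a brute-force sketch along those lines (best_balance is my own name; it works with the logs of the factors, counting an inverted factor i as -log(i)):

function best_balance(n)
    best_q, best_err = 1.0, Inf
    for mask in 0:(2^n - 1)   # each mask encodes a subset of factors to invert
        logq = sum(((mask >> (i - 1)) & 1 == 1 ? -log(i) : log(i)) for i in 1:n)
        if abs(logq) < best_err   # closer to log(1) = 0 is better
            best_err, best_q = abs(logq), exp(logq)
        end
    end
    return best_q
end

[best_balance(n) for n in 1:12]   # the best q approaches 1 as n grows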

I have computed the optimal partitionings up to \(n = 30\), where there are a billion possibilities to choose from.

[Figure: Optimum balance, the best achievable q for each n]

The graph is clearly flatlining. You could use the same method to force convergence to any other value between \(0\) and \(n!\).

And thus we have yet another answer to the question in the tweet that launched this adventure. What happens when you divide instead of multiply in n!? Anything you want.

Daniel Lemire's blog: Science and Technology links (February 9th, 2019)

  1. Though deep learning has proven remarkably capable at many tasks like image classification, it is possible that the problems it is solving are just simpler than we think:

    At its core our work shows that [neural networks] use the many weak statistical regularities present in natural images for classification and don’t make the jump towards object-level integration of image parts like humans.

    This challenges the view that deep learning is going to bring us much closer to human-level intelligence in the near future.

  2. Though we age, it is unclear how our bodies keep track of the time (assuming they do). Researchers claim that our blood cells could act as timekeepers. When you transplant organs from a donor, they typically behave according to the age of the recipient. However, blood cells are an exception: they keep the same age as the donor. What would happen if we were to replace all the blood cells in your body with younger or older ones?
  3. A tenth of all coal is used to make steel. This suggests that it might be harder than people expect to close coal mines and do away with fossil fuels entirely in the short or medium term.
  4. Elite powerlifters have surprisingly low testosterone (male hormone) levels. This puts a dent in the theory that strong men have high testosterone levels.
  5. Chimpanzees learn to crack nuts faster than human beings. This challenges the model that human beings are cognitively superior.
  6. It seems that the male brain ages more rapidly than the female brain.
  7. Grant argues that vitamin D supplements reduce cancer rates, but that medicine is slow to accept it.
  8. Women prefer more masculine looking men in richer countries. I do not have any intuition as to why this might be.
  9. Geographers claim that the arrival of Europeans to America, and the subsequent reduction of population (due mostly to diseases), led to a cooling of worldwide temperatures. It seems highly speculative to me that there was any measurable effect.
  10. The New York Times has a piece on a billionaire called Brutoco who says that “he spends much of his time flying around the world lecturing on climate change” and lives in a gorgeous villa surrounded by a golf course. There is no talk of his personal carbon footprint.

Charles Petzold: James Earl Jones Saves the Play

When I was 15 years old, my mother started letting me take the bus into New York City by myself. I would walk to the bus stop at the intersection of Route 9 and Ernston Road (at that time a traffic light), and after a 45-minute bus trip arrive at the Port Authority Bus Terminal. Usually I just went to bookstores but sometimes I saw a play, and that’s how I saw James Earl Jones onstage in The Great White Hope, and witnessed the great actor improvising a save when something unexpected threatened to blunt the play’s impact.

... more ...

Daniel Lemire's blog: Faster remainders when the divisor is a constant: beating compilers and libdivide

Not all instructions on modern processors cost the same. Additions and subtractions are cheaper than multiplications, which are themselves cheaper than divisions. For this reason, compilers frequently replace division instructions with multiplications. Roughly speaking, it works in this manner. Suppose that you want to divide a variable n by a constant d. You have that n/d = n * (2^N/d) / (2^N). The division by a power of two (/ (2^N)) can be implemented as a right shift if we are working with unsigned integers, which compiles to a single instruction; that is possible because the underlying hardware uses base 2. Thus if 2^N/d has been precomputed, you can compute the division n/d as a multiplication and a shift. Of course, if d is not a power of two, 2^N/d cannot be represented as an integer. Yet for N large enough (see the footnote at the end), we can approximate 2^N/d by an integer and have the exact computation of the remainder for all possible n within a range. I believe that all optimizing C/C++ compilers know how to pull this trick and it is generally beneficial irrespective of the processor’s architecture.
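
As a toy illustration with small numbers (my own example, not from the post): take d = 3 and N = 4, so 2^N = 16, and precompute c = ceil(16/3) = 6. Then for n = 7, (7 * 6) >> 4 = 42 >> 4 = 2, which is exactly 7/3 rounded down. With a realistic N (the code below uses N = 64 for 32-bit n), the approximation is exact for every n in the range.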

The idea is not novel and goes back to at least 1973 (Jacobsohn). However, engineering matters, because computer registers have a finite number of bits and multiplications can overflow. I believe that, historically, this was first introduced into a major compiler (the GNU GCC compiler) by Granlund and Montgomery (1994). While GNU GCC and the Go compiler still rely on the approach developed by Granlund and Montgomery, other compilers like LLVM’s clang use a slightly improved version described by Warren in his book Hacker’s Delight.

What if d is a constant, but not known to the compiler? Then you can use a library like libdivide. In some instances, libdivide can even be more efficient than compilers because it uses an approach introduced by Robison (2005) where we not only use multiplications and shifts, but also an addition to avoid arithmetic overflows.

Can we do better? It turns out that in some instances, we can beat both the compilers and a library like libdivide.

Everything I have described so far has to do with the computation of the quotient (n/d) but quite often, we are looking for the remainder (noted n % d). How do compilers compute the remainder? They first compute the quotient n/d and then they multiply it by the divisor, and subtract all of that from the original value (using the identity n % d = n - (n/d) * d).

Can we take a more direct route? We can.

Let us go back to the intuitive formula n/d = n * (2N/d) / (2N). Notice how we compute the multiplication and then drop the least significant N bits? It turns out that if, instead, we keep these least significant bits, and multiply them by the divisor, we get the remainder, directly without first computing the quotient.

The intuition is as follows. To divide by four, you might choose to multiply by 0.25 instead. Take 5 * 0.25: you get 1.25. The integer part (1) gives you the quotient, and the decimal part (0.25) is indicative of the remainder: multiply 0.25 by 4 and you get 1, which is the remainder. Not only is this more direct and potentially useful in itself, it also gives us a way to check quickly whether the remainder is zero. That is, it gives us a way to check that one integer is divisible by another: compute x * 0.25; the decimal part is less than 0.25 if and only if x is a multiple of 4.
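
Continuing the toy example above (again my own numbers): with d = 3, N = 4 and c = 6, the product 7 * 6 = 42 splits into 2 * 16 + 10. The high bits give the quotient 2, while the discarded low bits, 10, play the role of the decimal part. Multiplying them back by the divisor and shifting recovers the remainder: (10 * 3) >> 4 = 30 >> 4 = 1, which is indeed 7 mod 3.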

This approach was known to Jacobsohn in 1973, but as far as I can tell, he did not derive the mathematics. Vowels in 1994 worked it out for the case where the divisor is 10, but (to my knowledge), nobody worked out the general case. It has now been worked out in a paper to appear in Software: Practice and Experience called Faster Remainder by Direct Computation.

In concrete terms, here is the C code to compute the remainder of the division by some fixed divisor d:

uint32_t d = ...; // your divisor > 0

uint64_t c = UINT64_C(0xFFFFFFFFFFFFFFFF) / d + 1;

// fastmod computes (n mod d) given precomputed c
uint32_t fastmod(uint32_t n) {
  uint64_t lowbits = c * n;
  return ((__uint128_t)lowbits * d) >> 64; 
}

The divisibility test is similar…

uint64_t c = 1 + UINT64_C(0xffffffffffffffff) / d;


// given precomputed c, checks whether n % d == 0
bool is_divisible(uint32_t n) {
  return n * c <= c - 1; 
}

To test it out, we did many things, but in one particular test, we used a hashing function that depends on the computation of the remainder. We vary the divisor and compute many random values. In one instance, we make sure that the compiler cannot assume that the divisor is known (so that the division instruction is used); in another case we let the compiler do its work; and finally we plug in our function. On a recent Intel processor (Skylake), we beat state-of-the-art compilers (e.g., LLVM’s clang, GNU GCC).

The computation of the remainder is nice, but I like the divisibility test even better. Compilers generally don’t optimize divisibility tests very well. A line of code like (n % d) == 0 is typically compiled to the computation of the remainder ((n % d)) and a test to see whether it is zero. Granlund and Montgomery have a better approach if d is known ahead of time, and it involves computing the inverse of an odd integer using Newton’s method. Our approach is simpler and faster (on all tested platforms) in our tests. It is a multiplication by a constant followed by a comparison of the result with said constant: it does not get much cheaper than that. It seems that compilers could easily apply such an approach.

We packaged the functions as part of a header-only library which works with all major C/C++ compilers (GNU GCC, LLVM’s clang, Visual Studio). We also published our benchmarks for research purposes.

I feel that the paper is short and to the point. There is some mathematics, but we worked hard so that it is as easy to understand as possible. And don’t skip the introduction! It tells a nice story.

The paper contains carefully crafted benchmarks, but I came up with a fun one for this blog post which I call “fizzbuzz”. Let us go through all integers in sequence and count how many are divisible by 3 and how many are divisible by 5. There are far more efficient ways to do that, but here is the programming 101 approach in C:

  for (uint32_t i = 0; i < N; i++) {
    if ((i % 3) == 0)
      count3 += 1;
    if ((i % 5) == 0)
      count5 += 1;
  }

Here is the version with our approach:

static inline bool is_divisible(uint32_t n, uint64_t M) {
  return n * M <= M - 1;
}

...


  uint64_t M3 = UINT64_C(0xFFFFFFFFFFFFFFFF) / 3 + 1;
  uint64_t M5 = UINT64_C(0xFFFFFFFFFFFFFFFF) / 5 + 1;
  for (uint32_t i = 0; i < N; i++) {
    if (is_divisible(i, M3))
      count3 += 1;
    if (is_divisible(i, M5))
      count5 += 1;
  }

Here is the number of CPU cycles spent on each integer checked (average):

Compiler:      4.5 cycles per integer
Fast approach: 1.9 cycles per integer

I make my benchmarking code available. For this test, I am using an Intel (Skylake) processor and GCC 8.1.

Your results will vary. Our proposed approach may not always be faster. However, we can claim that some of the time, it is advantageous.

Update: There is a Go library implementing this technique.

Further reading: Faster Remainder by Direct Computation: Applications to Compilers and Software Libraries, Software: Practice and Experience (to appear)

Footnote: What is N? If both the numerator n and the divisor d are 32-bit unsigned integers, then you can pick N=64. This is not the smallest possible value. The smallest possible value is given by Algorithm 2 in our paper and it involves a bit of mathematics (note: the notation in my blog post differs from the paper, N becomes F).

Follow-up blog post: More fun with fast remainders when the divisor is a constant (where I discuss finer points)

CreativeApplications.Net: Anti-Drawing Machine – Whimsical and imperfectly characteristic collaborator

Anti-Drawing Machine – Whimsical and imperfectly characteristic collaborator
Created by Soonho Kwon, Harsh Kedia and Akshat Prakash, the Anti-Drawing Machine project explores possible alternatives for how we engage with robots today—instead of being purely utilitarian and precise, the Anti-Drawing Machine is a robot that can be whimsical and imperfectly characteristic.

Michael Geist: CRTC on OpenMedia’s Site Blocking Campaign: “Contributed to a Better Understanding of the Issues”

The CRTC released four cost awards yesterday arising from the Bell coalition’s proposal for a site blocking system. The Commission rejected the proposal last year on jurisdictional grounds and has now followed up with significant cost awards to public interest groups that participated in the process. The FairPlay coalition challenged the cost awards to OpenMedia and CIPPIC, arguing that its citizen engagement was “deliberately misleading and cannot represent responsible participation in the proceeding.” It also argued that the Public Interest Advocacy Centre’s participation was “irresponsible in nature” since it included arguments questioning the harm of piracy, which FairPlay maintained encouraged the Commission “to disregard the basic tenets of the Copyright Act.”

The CRTC soundly rejected these arguments, ordering the FairPlay coalition to pay over $130,000 in costs as part of four applications (OpenMedia/CIPPIC, PIAC, FRPC, UDC). The Commission’s analysis on the value of the OpenMedia/CIPPIC public campaign is particularly noteworthy given efforts by some commentators to question it:

the Commission considers that their respective public interest mandates and the online engagement campaign demonstrate that they represent the interests of Canadian Internet service subscribers. CIPPIC/OpenMedia have also satisfied the remaining criteria through their participation in the proceeding. In particular, the Commission considers that the online engagement campaign facilitated the broad and direct participation of thousands of Canadians in the proceeding. Importantly, a significant number of those who used the online engagement campaign to submit an intervention to the Commission also customized the text to include their personal views on the proposal put forward by FairPlay. This broad facilitation, as well as the opportunity for an individual response, created a diverse evidentiary record that contributed to a better understanding of the issues before the Commission. While FairPlay may consider certain content of the campaign to be misleading, the overall participation encouraged by the campaign contributed to a better understanding of the issues before the Commission and did not constitute irresponsible participation by CIPPIC/OpenMedia.

Further, the fact that CIPPIC/OpenMedia did not expand on the online engagement campaign evidence in their own intervention does not mean that they participated in the proceeding in an irresponsible way. Additional interpretive analysis of the evidence from the campaign within CIPPIC/OpenMedia’s intervention would have been preferable, both in assisting the Commission in analyzing the numerous individual interventions, and in making the direct link between the campaign and the content of CIPPIC/OpenMedia’s intervention. However, the failure to do so in this case was not irresponsible, and the clear link between the campaign and the content of the intervention on issues such as network neutrality ensured that the entirety of CIPPIC/OpenMedia’s participation contributed to a better understanding of the issues before the Commission.

The CRTC similarly rejected claims that PIAC’s participation was irresponsible:

PIAC’s copyright submissions, although novel, remained relevant and assisted the Commission in developing a better understanding of the matters that were considered. FairPlay’s application itself proposed a novel use of the Telecommunications Act, so the proceeding should have been expected to result in the advancement of unconventional or otherwise innovative arguments by interveners.

The CRTC decisions are welcome, confirming the value of public interest campaigns that encourage the public to participate in proceedings and ensure their voices are heard. While there has been considerable criticism of the Commission in recent months regarding its treatment of civil society, it deserves credit for thoroughly rejecting FairPlay’s attempt to avoid paying the costs associated with responding to its now-defeated proposal.

The post CRTC on OpenMedia’s Site Blocking Campaign: “Contributed to a Better Understanding of the Issues” appeared first on Michael Geist.

Michael Geist: Government Service Delivery in the Digital Age: My Appearance Before the Standing Committee on Access to Information, Ethics and Privacy

Last week, I appeared before the House of Commons Standing Committee on Access to Information, Privacy and Ethics as part of its study on government services and privacy. The discussion touched on a wide range of issues, including outdated privacy rules and the policy complexity of smart cities. I concluded by noting:

“we need rules that foster public confidence in government services by ensuring there are adequate safeguards, transparency and reporting mechanisms to give the public the information it needs about the status of their data, and appropriate levels of access so that the benefits of government services can be maximized. That is not new. What is new is that this needs to happen in an environment of changing technologies, global information flows, and an increasingly blurry line between public and private in service delivery.”

My full opening remarks are posted below.

Appearance before the House of Commons Standing Committee on Access to Information, Privacy and Ethics, January 29, 2019

Good afternoon. My name is Michael Geist.  I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Internet and E-commerce Law, and I am a member of the Centre for Law, Technology, and Society.

My areas of speciality include digital policy, intellectual property, and privacy.  I served for many years on the Privacy Commissioner of Canada’s External Advisory Board and I have been privileged to appear before multiple committees on privacy issues, including PIPEDA, Bill S-4, Bill C-13, the Privacy Act, and this committee’s review of social and media privacy.

I am also the chair of Waterfront Toronto’s Digital Strategy Advisory Panel, which is actively engaged in the smart city process in Toronto involving Sidewalk Labs.

As always, I appear in a personal capacity as an independent academic representing only my own views.

This committee’s study on government services and privacy provides an exceptional opportunity to tackle many of the challenges surrounding government services, privacy and technology today. Indeed, I believe that what makes this issue so compelling is that it represents the confluence of public sector privacy law, private sector privacy law, data governance, and emerging technologies.

The Sidewalk Labs issue is a case in point. While it is not about federal government services – it is a municipal project – the debates are fundamentally about the role of the private sector in the delivery of government services, the collection of public data, and the oversight or engagement of governments at all levels. For example, the applicable law to the project remains somewhat uncertain: is it PIPEDA? Provincial privacy law? Both? How do we grapple with new challenges when even determining the applicable law is not a straightforward issue?

My core message today is that looking at government services and privacy requires more than just a narrow examination of what the federal government is doing to deliver services, assessing the privacy implications, and identifying what rules or regulations could be amended or introduced to better facilitate services that meet the needs of Canadians and provide them with the privacy and security safeguards that they rightly expect.

I believe that the government services of tomorrow will engage a far more complex ecosystem that involves not just the conventional questions surrounding the suitability of the Privacy Act in the digital age. Rather, given the overlap between public and private, federal, provincial, and local, domestic and foreign, we need a more holistic assessment that recognizes that service delivery in the digital age necessarily implicates more than just one law.

Those services will involve questions about the sharing of information across government, the location of data storage, the transfer of information across borders, and the use of information by governments and the private sector for data analytics, artificial intelligence, and other uses.

In other words, we are talking about the Privacy Act, PIPEDA, trade agreements that feature data localization and data transfer rules, the GDPR, international treaties such as the forthcoming work at the WTO on e-commerce, community data trusts, open government policies, crown copyright, private sector standards, and emerging technologies.

It is a complex, challenging, and exciting space. I’d be happy to touch on any of these issues during questions, but in the interests of time, I will limit my slightly deeper dive to the Privacy Act, which as you know is the foundational statute for government collection and use of personal information.

There have been multiple studies and successive federal privacy commissioners who have tried to sound the alarm on the legislation that is viewed as outdated and inadequate. Canadians understandably expect that the privacy rules that govern the collection, use, and disclosure of their personal information by the federal government will meet the highest standards. For decades, we have failed to meet that standard. As pressure mounts for new uses of data collected by the federal government, the necessity of a law fit-for-purpose increases.

I’d like to point to three issues in particular with the federal rules governing privacy and their implications:

i.    Reporting Power

The failure to engage in meaningful Privacy Act reform may be attributable in part to the lack of public awareness of the law and its importance.  The Privacy Commissioner has played an important role in educating the public about PIPEDA and broader privacy concerns.  The Privacy Act desperately needs to include a similar mandate for public education and research.

Moreover, the notion of limiting reporting to an annual report reflects a bygone era. In our current 24-hour, social-media-driven news cycle, information – particularly information that touches on the privacy of millions of Canadians – cannot be permitted to remain out of the public eye until an annual report can be tabled. Where the Commissioner deems it in the public interest, the Office must surely have the power to disclose in a timely manner.

ii.    Limiting Collection

The committee has heard repeatedly that the Privacy Act falls woefully short in meeting the standards of a modern privacy act. Indeed, at a time when government is expected to be the model, it instead requires less of itself than it does of the private sector. A key reform in my view is the limiting collection principle. A hallmark of private sector privacy law, it would require the government to collect only that information which is strictly necessary for its programs and activities.

This is particularly relevant with respect to emerging technologies and artificial intelligence. The Office of the Privacy Commissioner of Canada recently reported on the use of data analytics and AI in delivering certain programs. For example, it cited:

•    the Immigration, Refugees and Citizenship Canada (IRCC) Temporary Resident Visa Predictive Analytics Pilot Project which uses predictive analytics and automated decision-making as part of the visa approval processes

•    the CBSA’s use of advanced analytics in its National Targeting Program to evaluate the passenger data of all air travelers arriving in Canada, as well as its planned expanded use of analytics in risk assessing individuals;

•    the Canada Revenue Agency’s (CRA’s) increasing use of advanced analytics to sort, categorize and match taxpayer information against perceived indicators of risk of fraud and non-compliance.

These technologies offer great potential, but they may also encourage greater collection, sharing and linkage of data. This requires robust privacy impact assessments and consideration of the privacy cost-benefits.

iii.    Data Breaches and Transparency

Breach disclosure legislation has become commonplace in the private sector privacy world and it has long been clear that similar disclosure requirements are needed within the Privacy Act. Despite its importance, it took more than a decade for Canada to pass and implement data breach rules for the private sector. As long as that took, we are still waiting for equivalent legislation at the federal government level.

As this committee knows, the data indicates that hundreds of thousands of Canadians have been affected by breaches of their private information. The rate of reporting these breaches remains low. If the public is to trust the safety and security of their personal information, there is a clear need for mandated breach disclosure rules within government.

Closely related to the issue of data breaches are broader rules and policies around transparency.  In a sense, the policy objective is to foster public confidence in the collection, use, and disclosure of their information by adopting a transparent, open approach about policies, safeguards, and instances where we fall short.

Recent emphasis has been on private sector transparency reporting.  Large Internet companies such as Google and Twitter have released transparency reports and they have been joined by some of Canada’s leading communications companies such as Rogers and Telus.  Remarkably, there are still some holdouts – notably Bell – that do not release transparency reports.

However, these reports represent just one side of the picture. Public awareness of the world of requests and disclosures would be far more informed if government also released transparency reports. These need not implicate active investigations, but there is little reason that government should not be subject to the same expectations on transparency as the private sector.

Ultimately, we need rules that foster public confidence in government services by ensuring there are adequate safeguards, transparency and reporting mechanisms to give the public the information it needs about the status of their data, and appropriate levels of access so that the benefits of government services can be maximized. That is not new.  What is new is that this needs to happen in an environment of changing technologies, global information flows, and an increasingly blurry line between public and private in service delivery.

I look forward to your questions.

The post Government Service Delivery in the Digital Age: My Appearance Before the Standing Committee on Access to Information, Ethics and Privacy appeared first on Michael Geist.

Planet Lisp: Wimpie Nortje: Be careful with Ironclad in multi-threaded applications.

Update

Thanks to eadmund and glv2 the issue described in this article is now fixed and documented clearly. The fixed version of Ironclad should find its way into the Quicklisp release soon.

Note that there are other objects in Ironclad which are still not thread-safe. Refer to the documentation on how to handle them.

Whenever you write a program that uses cryptographic tools you will use cryptographically secure random numbers. Since most people never write security related software they may be surprised to learn how often they are in this situation.

Cryptographically secure pseudo-random number generators (PRNGs) are a core building block in cryptographic algorithms, which include things like hashing algorithms and generation algorithms for random identifiers with a low probability of repetition. The two main uses are to securely store hashed passwords (e.g. PBKDF2, bcrypt, scrypt) and to generate random UUIDs. Most web applications with user accounts fall into this category, and so does much other non-web software.

If your program falls into this group you are almost certainly using Ironclad. The library tries hard to be easy to use even for those without cryptography knowledge. To that end it uses a global PRNG instance with a sensible setting for each particular target OS and expects that most users should never bother to learn about PRNGs.

The Ironclad documentation is clear, don't change the default PRNG! First "You should very rarely need to call make-prng; the default OS-provided PRNG should be appropriate in nearly all cases." And then "You should only use the Fortuna PRNG if your OS does not provide a sufficiently-good PRNG. If you use a Unix or Unix-like OS (e.g. Linux), macOS or Windows, it does."

These two quotes are sufficient to discourage any idle experimentation with PRNG settings, especially if you only want to get the password hashed and move on.

The ease of use comes to a sudden stop if you try to use PRNGs in a threaded application on CCL. The first thread works fine but all others raise error conditions about streams being private to threads. On SBCL the problem is much worse. No error is signaled and everything appears to work but the PRNG frequently returns repeated "random" numbers.

These repeated numbers may never be detected if they are only used for password hashing. If, however, you use random UUIDs, you may from time to time get duplicates, which will cause havoc in any system expecting objects to have unique identifiers. It will also be extremely difficult to find the cause of the duplicate IDs.

How often do people write multi-threaded CL programs? Very often. By default Hunchentoot handles each HTTP request in its own thread.

The cause of this problem is that Ironclad's default PRNG, :OS, is not implemented to be thread safe. This is the case on Unix where it is a stream to /dev/urandom. I have not checked the thread-safety on Windows where it uses CryptGenRandom.

Solutions

There exists a bug report for Ironclad about the issue but it won't be fixed.

Two options to work around the issue are:

  1. Change the global *PRNG* to Fortuna

     (setf ironclad:*PRNG* (ironclad:make-prng :fortuna))
    
    Advantage:
    It is quick to implement and it appears to be thread safe.
    Disadvantage:
    :FORTUNA is much slower than :OS
  2. Use a thread-local instance of :OS

     (make-thread
      (lambda ()
        (let ((ironclad:*PRNG* (ironclad:make-prng :os)))
          (use-prng))))
    
    Advantage:
    :OS is significantly faster than :FORTUNA. It is also Ironclad's recommended PRNG.
    Disadvantages:
    When the PRNG is only initialized where needed, it is easy to miss places where it should be initialized.
    When the PRNG is initialized in every thread it causes unnecessary processing overhead in threads where it is not used.

Summary

It is not safe to use Ironclad-dependent libraries in multi-threaded programs with the default PRNG instance. On SBCL it may appear to work but you will eventually run into hard-to-debug problems with duplicate "random" numbers. On CCL the situation is better because it will signal an error.

Volatile and Decentralized: Over-the-Air Arduino firmware updates using Firebase, Part 1

Just a reminder that I'm blogging on Medium these days.

I just posted another article, this time on using Firebase to support over-the-air firmware updates for Arduino projects. Check it out!

things magazine: Rising high

Art and science. Landscapes by Cornelia Fitzroy / landscapes by Michael Reisch / the drawing archive at the Norman Foster Foundation / sort of related, I wouldn’t build my dream home in joyless, moralistic Scrutopia / A Peckham Poem, by … Continue reading

OCaml Weekly News: OCaml Weekly News, 05 Feb 2019

  1. Beta release of Logarion 0.6
  2. Yojson 1.6.0
  3. Dynlink works in native mode but not in bytecode?
  4. Second stage of the jbuilder deprecation
  5. Caqti 1.0.0
  6. PSA: cstruct 3.4.0 removes old ocamlfind subpackage aliases
  7. Other OCaml News

MattCha's Blog: 2019 Puerh Tea Predictions & Checking Past Predictions


One of my favorite posts of the year is Cwyn N’s yearly puerh predictions.  Last year I had rough-copied some predictions of my own and was going to post them but, like a lot of rough copies I have on my PC, I never finished or published that post.  This year I thought it would be interesting to go back to some previous years’ predictions and see if they came true…

2018 The Year of Puerh Storage

Cwyn N – Storage Prediction 2018 – The biggest single prediction of 2018.  She pointed out that a lot was going on out of public view in private chats and invite-only Slack threads, and that this just basically came into full public view this year.  But did it ever explode.

James- Big Storage Solutions- James of TeaDB isn’t really a predictions type of guy but he did hint that larger storage setups are going to be the next big thing in storage.  Then he teased us all with the Euro Cave.

Me- I embraced the whole storage topic and thought I’d run with that by offering a challenge to the pumidor-centric view of Western puerh storage/aging in this article.

Marco – That article of mine was followed by detailed comparison reporting by Marco on Mylar bag hotbox storage (which I had no idea about).  Together they both (let’s be honest, it’s mainly Marco’s article) ushered in a shift in Western puerh storage in 2018 toward the potential for more heated and sealed options going forward.

My Past Predictions are Pretty Good

Rise in shu puerh popularity – I predicted its popularity increase way back here with an explanation of why it would happen.  Shu puerh seems to have exploded this year.  Yunnan Sourcing seems to have grabbed a large section of that market by being ahead of the other vendors while offering some really nice pesticide-free product at low prices.  The marketing of this product has been bolstered by many recent reviews of Yunnan Sourcing shu puerh on TeaDB.

Global Uncertainty/Currency Issues – I predicted global uncertainty and currency issues influencing the puerh world in comments here.  Currency was cited by Scott of Yunnan Sourcing as one of the reasons for puerh price rises this past year in the wake of a Chinese/US trade war.  It was likely also a reason the Essence of Tea distanced themselves from trading in Great Britain Pounds (GBP) and switched to trading in USD this year, in light of what is going on with Brexit.  United States buyers are less sensitive to this because they don’t have to exchange their currency when they purchase puerh from Western vendors, but the rest of us pay closer attention.  It may even have had the effect of keeping 2018 prices from rising and may even result in some price corrections.

Better Black Friday Sales – See my last-minute prediction here and a follow-up blog post confirming the resulting sales burst.  Considering the enormous sales generated this year, I predict that 2019 Black Friday will be even bigger.

Why? It is a cyclic phenomenon that is taking place here, people.  Big sales = more puerh buyers holding off on yearly purchases and waiting to spend the majority of their puerh budget on Black Friday, where they can get the most from their money = sales become larger to entice spending in a very small competitive window = more and more people hold off puerh buying until Black Friday = competition is more fierce, so vendors have to offer better sales.

You know who started this all, right? I give Paul of white2tea credit for starting it all here, by the way. Thanks.

Qing Bing as Response to Xiao Binging of Puerh Industry – This recent article covers the issue in detail.  Basically, I called out the industry in this post here, which argued that someone has to press a huge cake to make it right.  So Paul of white2tea pressed a few huge puerh cakes that he gave away in a sweepstakes marketing promotion in response to the criticism.  Awesome.  (See below.)

My 2019 Puerh Predictions

Shu will continue to be big and will grow. Western vendors will continue to find interesting ways to market shu puerh.  This includes more premium options as well as more unique options like bamboo, minis, coins, bricks, chocobars, waffles, Snap Chat ghost shapes, etc.  White2tea will probably go the route of interesting marketing and Yunnan Sourcing the route of more premium offerings.

Creative blends will be used to help keep sheng prices from rising too much.  The Essence of Tea was spot on with their blending this year as a method to keep prices more reasonable while still offering a unique sheng puerh product.  They did this in three clever and different offerings this past year.  First, by offering this 2018 Spring Essence of Tea Piercing the Illusion wild tea and sheng puerh blend that was Qi focused.  Second, by offering this 2018 Spring Essence of Tea Gua Feng Zhai, which used a small amount of the huang pian (yellow matured leaves) in the harvest.  Third, by offering this 2018 Essence of Tea Yiwu, a whole year’s harvest (Spring and Autumn combined), single-family blend.  I see more vendors offering whole year’s harvest blends, cakes which keep in the huang pian, as well as wild tea and sheng puerh blends in the future.

More famous puerh-producing areas/forests in the Bulang/Bada are discovered.  This was one of my unpublished predictions from last year.  Just look at a map and tell me that there aren’t tons of undiscovered quality puerh locations there, in the forest somewhere.  Who will be the first to find these areas?  I don’t know, but in 2008 I predicted the rise of border tea (it was actually teamster Kim’s prediction) on this blog, and while it was slow to become popular, it is now a thing.

Even Bigger Black Friday Promotions (see above)

Specific Western Puerh Vendor Predictions

Paul of white2tea will be the first Western puerh vendor to press a puerh melon (we all want the melon, don’t we?) and he will continue to expand his offerings of minis/mini tongs.  He will be the first Western puerh vendor to offer a 500g or larger bing for sale on his site.

Scott of Yunnan Sourcing will offer more sheng blends including higher quality sheng puerh blends- a mid-priced and a high-priced blend to complement the popular lower price point Impression blend.  He is becoming quite the blender these days.

David and Yingxi of the Essence of Tea will offer some sort of promotion/giveaway involving some interesting antique tea, even though the idea of offering a promotion such as this is counter to their hands-off approach to marketing.  They might even press their first shu puerh cake… maybe.

Oh those crazy predictions… let’s see if any come true.
 

Peace

MattCha's Blog: The Best of the Cheapest: 2017 Yunnan Sourcing Impression & Scott’s Blending Skills


In the Western puerh scene, white2tea’s Paul (Twodog) is known for his blends; certainly they can be quite delicious.  Then along came 2017... 2017 was a watershed year for Yunnan Sourcing puerh blends.  In the last two years they have developed a good reputation for not only these two famous yet very inexpensive blends, but some of their other lab-tested, pesticide-free, shu blends as well.  The 2017 Rooster King Shu sold out quickly; the quality really surprised me.  The other, this 2017 Yunnan Sourcing Impression ($27.00 for a 357g cake, or $0.08/g), has been very popular since its Autumn 2017 release (Edit: this tea was later found out to be released in Spring 2018).

It seemed the hype for this cake reached a climax at Black Friday this year with a few great articles featuring this full 357g sheng blend.  I don’t think I could add any more than this post on the topic really.  I wonder how many of these guys flew off the shelves on Black Friday?  I’ll bet that this was the single biggest seller this Black Friday.  I have been eyeing them for quite some time so ended up buying a handful of them to make it worth the shipping.

Many have championed this 2017 Yunnan Sourcing Impression as Scott of Yunnan Sourcing’s best sheng puerh blend to date.  I had pretty much sampled the whole Yunnan Sourcing brand puerh selection from 2009-2011 and some of the 2012, but I have never tried any of Scott’s sheng blends before; this will be a first for me...

Dry leaves are scented with a very vibrant tropical fruit odour over dry woody layers, peat, and moss.  The quality is immediately apparent in the vibrancy here.

The first infusion starts with a soft fruity lemon and pineapple note; it transitions to a swelling creamy sweet taste.  There are metallic tastes and slight wood in the approach to the long creamy aftertaste.  The mouthfeel is sticky in the mouth.

The second is sweeter and fuller.  The initial taste is of stones and sweet vegetables; there is a lemon note in there as well.  The taste has a nice cool finish in a stimulating throat feeling.  There is a honeydew melon flavor lingering in the throat.  Lots of interesting notes playing out here.  Soft bitterness, slight sourness, soft astringency in the throat.  Creamy sweet with fruity sweet and even a vegetable sweetness.

The third infusion starts a touch spicy with sweet yam and creamy sweet notes, followed by a metallic taste which makes the mouth sticky and nicely stimulates the throat.  The taste is long and creamy, almost cucumber in the aftertaste.  There is a lot going on here.  The cheeks are sticky, as are the lips, and the mid-throat opens to embrace the creamy and fresh aftertaste, which also has notes of metal, yams, and even suggestions of wood.  The Qi makes my face flush and a heat sensation emerge. I feel it in the eyes, which seem heavy.  I feel simultaneously sleepy and relaxed.

The fourth infusion has a juicy sweet taste initially, then a nice creamy sweeping aftertaste.  The mouthfeel is sticky and dense.

The fifth infusion has a nice burst of fruit but mainly creamy sweetness initially, then it slowly rides into a long sweet creamy aftertaste in a full sticky mouthfeeling.  The mouthfeel has a cotton-like feel in the mouth and throat.  The creamy sweet taste is long with a touch of faint cooling.

The sixth infusion has a juicy fruity vibe, creamy sweetness, and a softer sticky mouthfeel.  This tea is fading a bit here.  The mouthfeeling is nice, and the creamy fruity taste is fresh and vibrant in the mouth.

The seventh infusion has lots of those creamy sweet notes, but the sweetness has a simple and very pure and fresh feeling to it.  You know that tropical sweetness: banana, pineapple maybe.  It’s yummy and the aftertastes carry it along nicely.  Almost like an artificial banana candy taste; I like that.

On the eighth I add 10 seconds to the flash infusion and it pushes out a bit more complexity in the flavor.  Green beans, banana, yam, moderate stickiness/dryness.  More of a vegetable-fruit sweetness.

On the ninth infusion I add 15 seconds to flash and get a slightly chalky, dry wood, and creamy taste with vegetables underneath.  This infusion is a touch more bitter; it makes the tight sweet aftertaste pop, but it is overshadowed by the woody/bitter taste.  There is a butterscotch taste in the aftertaste.

On the tenth infusion I add 20 seconds to the flash infusion and get a thicker, richer taste of woods and beans, with sweeter tastes returning for a pop in the aftertaste under quilted bitter woods.  There is not much complex taste left in these leaves in the late infusions, and their stamina is not the greatest.  The sweet taste sure pops with that bitter astringency but then goes back to bitter quite quickly here.  The qi is a pretty strong, heavy, dopey, make-me-feel-sleepy type of qi sensation.

On the eleventh I add 25 seconds to the flash infusion and get a really delicious dark chocolate taste with fruity notes in the distance and bittersweet cocoa up front.  This is yummy, arriving right when I was thinking about throwing in the towel with this one.  It is a Laoman E type of note, even though you can be sure there is none of that in here.  The body qi of this blend is really harmonious, and this puerh could easily be consumed now because it lacks a certain harshness in the body.  It actually makes the body feel warm and soothed; I like that.

I go for a 12th at 30 seconds more than flash; all of the sweet taste is pretty much gone here.  Nothing but bitter is left.  So, I end the session.  I steep this tea overnight and get a rich broth of flavors the next morning.

The flavours in the first few infusions are the best of the cheapest: complex and interesting.  The Qi is also the strongest so far; it offers a big relaxing dopey effect, if that’s your thing.  But the stamina is weaker than the other cheap budget puerh I have sampled.  I felt I had to add more time to infusions much earlier, and the tea ends much quicker than the other budget options.  To me, stamina is really important because I steep my puerh to the very end.  A tea that finishes at 12 infusions vs 20 makes a big difference in value.

This 2017 Yunnan Sourcing Impression is the same actual price per gram as 2018 white2tea Snoozefest ($0.08/g).  They are actually similar in many ways, the most obvious being that they are both multi-area Autumnal puerh blends.  To compare the two: the mouthfeel is better with Snoozefest by a little; for flavor, Snoozefest is more high notes, but Impression has more depth of flavor and more interesting flavor by a bit.  The Impression also has a deeper-feeling tea liquor; there is a certain thickness in the tea liquor whose absence is the biggest flaw in the 2018 Snoozefest.  The stamina of Snoozefest is better, as the late infusions are a very pleasant and enjoyable sweetness, where Impression is more bitter in the end and peters out faster. I like Snoozefest’s tighter compression as well.  The Impression is lab-tested pesticide-free, and the Snoozefest doesn’t feel as pure to me.  This 2017 Yunnan Sourcing Impression is the winner out of these similarly priced things, for sure.

All in all, this 2017 Impression is probably the best for how cheap it is; I don’t imagine finding anything cheaper and this enjoyable.  However, with that said, I would gladly pay double and get 3x more value, i.e. the 2018 white2tea Splendid.  There is probably no fresh puerh in this lower price range that tastes as good.  Also, I still tend to go for the 2018 Essence of Tea Bamboo Spring for drink-now sheng puerh consumption over any of these budget options.

You’re paying almost twice as much as for the 2017 Yunnan Sourcing Impression, but the winner is still the 2018 white2tea Splendid for the best of the cheapest sheng puerh.

Rats!  I just checked the website when I was adding the hyperlinks to this post and found out it has just sold out.  This exercise of finding the best of the cheapest feels a bit silly if we are just comparing sold-out puerh, doesn’t it?  Conveniently, the 2018 Yunnan Sourcing Impression was just released with a beautiful koi fish wrapper design.  I thought the 2017 wrapper was stunning!  I wonder if it is comparable in quality to this now famous 2017 Impression?

But wait, I have a late entry in the quest for the Best of the Cheapest fresh young sheng puerh…
Dogleg

Sean's (Dead Leaves Tea Club) Tasting Notes

Char's (Oblong Owl) Tasting Notes

John B's (Tea in the ancient world) Tasting Notes

Steepster Tasting Notes

Peace

Trivium: 04feb2019

Volatile and Decentralized: I'm blogging on Medium!

Hey folks! If you're reading this blog, you may be in the wrong place.

I've decided to try out Medium as a new blogging platform.

Check out my Medium blog here!

So far, I have one article posted: Using Firebase to Control your Arduino Project over the Web. Hopefully more to come soon.


OUR VALUED CUSTOMERS: While discussing the SUPER BOWL...

Planet Lisp: Quicklisp news: February 2019 Quicklisp dist update now available

New projects:
  • async-process — asynchronous process execution for common lisp — MIT
  • atomics — Portability layer for atomic operations like compare-and-swap (CAS). — Artistic
  • game-math — Math for game development. — MIT
  • generic-cl — Standard Common Lisp functions implemented using generic functions. — MIT
  • simplified-types — Simplification of Common Lisp type specifiers. — MIT
  • sn.man — stub man launcher; it should be a man parser. — mit
Updated projects: agutil, also-alsa, antik, april, cerberus, chipz, chronicity, cl+ssl, cl-all, cl-async, cl-collider, cl-dbi, cl-emb, cl-environments, cl-fluent-logger, cl-glfw3, cl-json-pointer, cl-las, cl-markless, cl-patterns, cl-readline, cl-rules, cl-sat, cl-sat.glucose, cl-sat.minisat, cl-sdl2-image, cl-syslog, cl-tiled, cl-who, clack, closer-mop, clss, commonqt, cover, croatoan, dexador, easy-audio, easy-bind, eazy-project, erudite, fast-websocket, gendl, glsl-toolkit, golden-utils, graph, jonathan, jp-numeral, kenzo, lichat-tcp-server, listopia, literate-lisp, local-time, ltk, mcclim, nodgui, overlord, petalisp, petri, pgloader, phoe-toolbox, pngload, postmodern, qmynd, qt-libs, qtools, qtools-ui, query-fs, remote-js, replic, rpcq, s-xml-rpc, safety-params, sc-extensions, serapeum, shadow, should-test, sly, static-dispatch, stumpwm, sucle, time-interval, trivia, trivial-clipboard, trivial-utilities, type-r, utility, vernacular, with-c-syntax, wuwei.

To get this update, use (ql:update-dist "quicklisp")

Enjoy!

The Geomblog: More FAT* blogging

Session 3: Representation and Profiling

Session 4: Fairness methods. 

things magazine: See through

This and that. Glass art by Rachel Goswell (Slowdive) / compare and contrast with the champagne and Opal Fruit lifestyle revealed in this oral history of Mark E. Smith / new from Morton Valence, Bob and Veronica’s Great Escape / … Continue reading

Tea Masters: Chinese New Year vacation

Next week, the Chinese New Year of the Pig will start! In a few hours, I'll head south for the week-long holiday. I'll be back on February 12th.
My Chinese New Year vacation has begun! In a few hours I'll head to the south of the island, then to Taichung, to celebrate my Chinese zodiac animal, the pig! I'll resume work and the shipping of your orders on February 12!

Planet Lisp: Didier Verna: Final call for papers: ELS 2019, 12th European Lisp Symposium

		ELS'19 - 12th European Lisp Symposium

			 Hotel Bristol Palace
			    Genova, Italy

			    April 1-2 2019

		   In cooperation with: ACM SIGPLAN
		In co-location with <Programming> 2019
		  Sponsored by EPITA and Franz Inc.

	       http://www.european-lisp-symposium.org/

Recent news:
- Submission deadline extended to Friday February 8.
- Keynote abstracts now available.
- <Programming> registration now open:
  https://2019.programming-conference.org/attending/Registration
- Student refund program after the conference.


The purpose of the European Lisp Symposium is to provide a forum for
the discussion and dissemination of all aspects of design,
implementation and application of any of the Lisp and Lisp-inspired
dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP,
Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We
encourage everyone interested in Lisp to participate.

The 12th European Lisp Symposium invites high quality papers about
novel research results, insights and lessons learned from practical
applications and educational perspectives. We also encourage
submissions about known ideas as long as they are presented in a new
setting and/or in a highly elegant way.

Topics include but are not limited to:

- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies

We invite submissions in the following forms:

  Papers: Technical papers of up to 8 pages that describe original
    results or explain known ideas in new and elegant ways.

  Demonstrations: Abstracts of up to 2 pages for demonstrations of
    tools, libraries, and applications.

  Tutorials: Abstracts of up to 4 pages for in-depth presentations
    about topics of special interest for at least 90 minutes and up to
    180 minutes.

  The symposium will also provide slots for lightning talks, to be
  registered on-site every day.

All submissions should be formatted following the ACM SIGS guidelines
and include ACM Computing Classification System 2012 concepts and
terms. Submissions should be uploaded to Easy Chair, at the following
address: https://www.easychair.org/conferences/?conf=els2019

Note: to help us with the review process please indicate the type of
submission by entering either "paper", "demo", or "tutorial" in the
Keywords field.


Important dates:
 -    08 Feb 2019 Submission deadline (*** extended! ***)
 -    01 Mar 2019 Notification of acceptance
 -    18 Mar 2019 Final papers due
 - 01-02 Apr 2019 Symposium

Programme chair:
  Nicolas Neuss, FAU Erlangen-Nürnberg, Germany

Programme committee:
  Marco Antoniotti, Universita Milano Bicocca, Italy
  Marc Battyani, FractalConcept, France
  Pascal Costanza, IMEC, ExaScience Life Lab, Leuven, Belgium
  Leonie Dreschler-Fischer, University of Hamburg, Germany
  R. Matthew Emerson, thoughtstuff LLC, USA
  Marco Heisig, FAU, Erlangen-Nuremberg, Germany
  Charlotte Herzeel, IMEC, ExaScience Life Lab, Leuven, Belgium
  Pierre R. Mai, PMSF IT Consulting, Germany
  Breanndán Ó Nualláin, University of Amsterdam, Netherlands
  François-René Rideau, Google, USA
  Alberto Riva, Unversity of Florida, USA
  Alessio Stalla, ManyDesigns Srl, Italy
  Patrick Krusenotto, Deutsche Welle, Germany
  Philipp Marek, Austria
  Sacha Chua, Living an Awesome Life, Canada

Search Keywords:

#els2019, ELS 2019, ELS '19, European Lisp Symposium 2019,
European Lisp Symposium '19, 12th ELS, 12th European Lisp Symposium,
European Lisp Conference 2019, European Lisp Conference '19

Michael Geist: The Real Over-the-Top: CBC President Likens Netflix to Cultural Imperialism Such As the British in India or French in Africa

CBC President Catherine Tait appeared on a panel of Canadian media leaders earlier today at the Prime Time in Ottawa conference devoted to “a look ahead.” After cutting off the Netflix representative at one point and complaining that his comments were running too long, Tait concluded with a stunning and wholly inappropriate analogy to characterize the impact of Netflix in Canada:

I was thinking of the British Empire and how if you were there and you were the viceroy of India you would feel that you were doing only good for the people of India. Or similarly, if you were in French Africa, you would think I’m educating them, I’m bringing their resources to the world, and I’m helping them. There was a time where cultural imperialism was absolutely accepted and, in fact, if you were a history student you would be proud of the contribution that these great empires gave.

I would say we are at the beginning of a new empire and just as it is probably the most exciting time in terms of screened entertainment, that I certainly in my career that I’ve ever experienced in terms of quality. When I watched “My Brilliant Friend” I was so moved to see a fantastic Italian language show with an Italian dialect. So unbelievable to be able to experience this cultural sharing. So for this we are very grateful to Netflix. However, fast forward, to what happens after imperialism and the damage that can do to local communities. So all I would say is let us be mindful of how it is we as Canadians respond to global companies coming into our country.

While it may be tempting to dismiss the comments as a mere error of judgement (Tait started the comment by noting she was “going off-script”), the reality is the CBC has become exceptionally hostile toward the Internet and Internet services. It was a prominent supporter of the website blocking proposal that was rejected last year by the CRTC, making it an outlier among public broadcasters that have typically worked to counter site blocking. In fact, an access-to-information request I filed revealed that CBC executives admitted that “this really isn’t our fight and it will cost us.”

However, the CBC doubled down on its position in its submission to the Broadcast and Telecommunications Legislative Review panel, calling for an expansion of site blocking to also include “blocking of websites that disseminate manipulated content, including news.” The proposal confirms fears that a site blocking system would quickly expand beyond copyright.

The CBC submission also calls for the establishment of new Internet taxes, applicable to both broadband and wireless services, to fund Canadian programming. It envisions a full Internet regulation structure with the CRTC empowered to licence and regulate online services to contribute to the discoverability of Cancon, including regulations on “digital media undertakings to ensure the promotion and discoverability of Canadian content.” It would also like to see requirements that online services provide Canadian rightsholders with data on how their content is used.

Tait tweeted out a portion of her cultural imperialism remark – “we are at the beginning of a new empire. Let’s be mindful when responding to global companies coming to Canada” – confirming that this was no lapse. Rather, it represents a shocking lack of historical awareness and a regulatory mindset that is completely out of touch with millions of Canadians who both pay for the CBC through their tax dollars and subscribe to Netflix.

The post The Real Over-the-Top: CBC President Likens Netflix to Cultural Imperialism Such As the British in India or French in Africa appeared first on Michael Geist.

bit-player: A Room with a View

Overview of the Greene Street substation

On my visit to Baltimore for the Joint Mathematics Meetings a couple of weeks ago, I managed to score a hotel room with a spectacular scenic view. My seventh-floor perch overlooked the Greene Street substation of the Baltimore Gas and Electric Company, just around the corner from the Camden Yards baseball stadium.

Some years ago, writing about such technological landscapes, I argued that you can understand what you’re looking at if you’re willing to invest a little effort:

At first glance, a substation is a bewildering array of hulking steel machines whose function is far from obvious. Ponderous tanklike or boxlike objects are lined up in rows. Some of them have cooling fins or fans; many have fluted porcelain insulators poking out in all directions…. If you look closer, you will find there is a logic to this mélange of equipment. You can make sense of it. The substation has inputs and outputs, and with a little study you can trace the pathways between them.

If I were writing that passage now, I would hedge or soften my claim that an electrical substation will yield its secrets to casual observation. Each morning in Baltimore I spent a few minutes peering into the Greene Street enclosure. I was able to identify all the major pieces of equipment in the open-air part of the station, and I know their basic functions. But making sense of the circuitry, finding the logic in the arrangement of devices, tracing the pathways from inputs to outputs—I have to confess, with a generous measure of chagrin, that I failed to solve the puzzle. I think I have the answers now, but finding them took more than eyeballing the hardware.


Basics first. A substation is not a generating plant. BGE does not “make” electricity here. The substation receives electric power in bulk from distant plants and repackages it for retail delivery. At Greene Street the incoming supply is at 115,000 volts (or 115 kV). The output voltage is about a tenth of that: 13.8 kV. How do I know the voltages? Not through some ingenious calculation based on the size of the insulators or the spacing between conductors. In an enlargement of one of my photos I found an identifying plate with the blurry and partially obscured but still legible notation “115/13.8 KV.”

The biggest hunks of machinery in the yard are the transformers (photo below), which do the voltage conversion. Each transformer is housed in a steel tank filled with oil, which serves as both insulator and coolant. Immersed in the oil bath are coils of wire wrapped around a massive iron core. Stacks of radiator panels, with fans mounted underneath, help cool the oil when the system is under heavy load. A bed of crushed stone under the transformer is meant to soak up any oil leaks and reduce fire hazards.

Greene Street transformer

Electricity enters and leaves the transformer through the ribbed gray posts, called bushings, mounted atop the casing. A bushing is an insulator with a conducting path through the middle. It works like the rubber grommet that protects the power cord of an appliance where it passes through the steel chassis. The high-voltage inputs attach to the three tallest bushings, with red caps; the low-voltage bushings, with dark gray caps, are shorter and more closely spaced. Notice that each high-voltage input travels over a single slender wire, whereas each low-voltage output has three stout conductors. That’s because reducing the voltage to one-tenth increases the current tenfold.
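
The tenfold figure is a round number; the exact ratio follows from conservation of power (P = V × I, so stepping the voltage down by some factor steps the current up by the same factor). A quick sketch, with an assumed load, since the transformer's actual rating isn't visible from a hotel window:

    import math

    # Power is conserved across a transformer (ignoring small losses),
    # so the line current scales inversely with the line voltage.
    V_in, V_out = 115_000.0, 13_800.0   # volts, from the nameplate notation above
    S = 25e6                            # assumed load: 25 MVA, illustrative only
    I_in = S / (math.sqrt(3) * V_in)    # three-phase line current, amperes
    I_out = S / (math.sqrt(3) * V_out)
    print(round(I_in), round(I_out), round(I_out / I_in, 2))   # -> 126 1046 8.33

So the real step-up in current is about 8.3 times, "tenfold" in round numbers, which is why the low-voltage side needs three stout conductors per phase.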

What about the three slender gray posts just to the left of the high-voltage bushings? They are lightning arresters, shunting sudden voltage surges into the earth to protect the transformer from damage.

Perhaps the most distinctive feature of this particular substation is what’s not to be seen. There are no tall towers carrying high-voltage transmission lines to the station. Clearing a right of way for overhead lines would be difficult and destructive in an urban center, so the high-voltage “feeders” run underground. In the photo at right, near the bottom left corner, a bundle of three metal-sheathed cables emerges from the earth. Each cable, about as thick as a human forearm, has a copper or aluminum conductor running down the middle, surrounded by insulation. I suspect these cables are insulated with layers of paper impregnated with oil under pressure; some of the other feeders entering the station may be of a newer design, with solid plastic insulation. Each cable plugs into the bottom of a ceramic bushing, which carries the current to a copper wire at the top. (You can tell the wire is copper because of the green patina.)

Bus bars

Connecting the feeder input to the transformer is a set of three hollow aluminum conductors called bus bars, held high overhead on steel stanchions and ceramic insulators. At both ends of the bus bars are mechanical switches that open like hinged doors to break the circuit. I don’t know whether these switches can be opened when the system is under power or whether they are just used to isolate components for maintenance after a feeder has been shut down. Beyond the bus bars, and hidden behind a concrete barrier, we can glimpse the bushings atop a different kind of switch, which I’ll return to below.

At this point you might be asking, why does everything come in sets of three—the bus bars, the feeder cables, the terminals on the transformer? It’s because electric power is distributed as three-phase alternating current. Each conductor carries a voltage oscillating at 60 Hertz, with the three waves offset by one-third of a cycle. If you recorded the voltage between each of the three pairs of conductors (AB, AC, BC), you’d see a waveform like the one above at left.
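
For concreteness, here is a tiny sketch (with a normalized amplitude) of the three offset waves the paragraph describes. It also checks two standard properties of three-phase power: the three phase voltages sum to zero at every instant, and the pair-to-pair (line-to-line) voltage has an amplitude sqrt(3) times the phase amplitude.

    import math

    F = 60.0    # line frequency, Hz
    A = 1.0     # phase amplitude, normalized

    def phase(t, k):
        """Voltage of phase k (0, 1, 2); each phase lags the last by 1/3 cycle."""
        return A * math.sin(2 * math.pi * F * t - k * 2 * math.pi / 3)

    ts = [i / (F * 360) for i in range(360)]          # one full cycle, 360 samples
    assert all(abs(sum(phase(t, k) for k in range(3))) < 1e-9 for t in ts)
    vab_peak = max(abs(phase(t, 0) - phase(t, 1)) for t in ts)
    print(round(vab_peak, 3))   # ~1.732, i.e. sqrt(3) times the phase amplitude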

At the other end of the conducting pathway, connected to three more bus bars on the low-voltage side of the transformer, is an odd-looking stack of three large drums. These

Choke coils

are current-limiting reactors (no connection with nuclear reactors). They are coils of thick conductors wound on a stout concrete armature. Under normal operating conditions they have little effect on the transmission of power, but in the milliseconds following a short circuit, the sudden rush of current generates a strong magnetic field in the coils, absorbing the energy of the fault current and preventing damage to other equipment.
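
The protective effect is easy to quantify: the reactor adds series impedance (X = 2πfL) that barely matters at normal load currents but caps the current that can flow into a short. A sketch with made-up but plausible numbers, treating impedances as simple magnitudes:

    import math

    # All values are assumptions for illustration; the actual figures aren't public here.
    F = 60.0                          # Hz
    V_LN = 13_800 / math.sqrt(3)      # line-to-neutral voltage on the 13.8 kV side
    L = 0.5e-3                        # assumed reactor inductance: 0.5 mH
    X = 2 * math.pi * F * L           # reactor reactance, ohms (~0.19)
    Z_SRC = 0.1                       # assumed source impedance, ohms

    # Simplified: magnitudes added directly, ignoring resistance/reactance phase angles.
    i_fault_bare = V_LN / Z_SRC
    i_fault_limited = V_LN / (Z_SRC + X)
    print(round(i_fault_bare / 1000), "kA without reactor")
    print(round(i_fault_limited / 1000), "kA with reactor")

With these numbers the reactor knocks a prospective fault current of roughly 80 kA down to under 30 kA, buying the breakers and transformers time to survive.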


So those are the main elements of the substation I was able to spot from my hotel window. They all made sense individually, and yet I realized over the course of a few days that I didn’t really understand how it all works together. My doubts are easiest to explain with the help of a bird’s eye view of the substation layout, cribbed from Google Maps:

Google Maps view of Greene Street substation

My window vista was from off to the right, beyond the eastern edge of the compound. In the Google Maps view, the underground 115 kV feeders enter at the bottom or southern edge, and power flows northward through the transformers and the reactor coils, finally entering the building that occupies the northeast corner of the lot. Neither Google nor I can see inside this windowless building, but I know what’s in there, in a general way. That’s where the low-voltage (13.8 kV) distribution lines go underground and fan out to their various destinations in the neighborhood.

Let’s look more closely at the outdoor equipment. There are four high-voltage feeders, four transformers, and four sets of reactor coils. Apart from minor differences in geometry (and one newer-looking, less rusty transformer), these four parallel pathways all look alike. It’s a symmetric four-lane highway. Thus my first hypothesis was that four independent 115 kV feeders supply power to the station, presumably bringing it from larger substations and higher-voltage transmission lines outside the city.

However, something about the layout continued to bother me. If we label the four lanes of the highway from left to right, then on the high-voltage side, toward the bottom of the map view, it looks like there’s something connecting lanes 1 and 2, and there’s a similar link between lanes 3 and 4. From my hotel window the view of this device is blocked by a concrete barricade, and unfortunately the Google Maps image does not show it clearly either. (If you zoom in for a closer view, the goofy Google compression algorithm will turn the scene into a dreamscape where all the components have been draped in Saran Wrap.) Nevertheless, I’m quite sure of what I’m looking at. The device connecting the pairs of feeders is a high-voltage three-phase switch, or circuit breaker, something like the ones seen in the image at right (photographed at another substation, in Missouri). The function of this device is essentially the same as that of a circuit breaker in your home electrical panel. You can turn it off manually to shut down a circuit, but it may also “trip” automatically in response to an overload or a short circuit. The concrete barriers flanking the two high-voltage breakers at Greene Street hint at one of the problems with such switches. Interrupting a current of hundreds of amperes at more than 100,000 volts is like stopping a runaway truck: It requires absorbing a lot of energy. The switch does not always survive the experience.

When I first looked into the Greene Street substation, I was puzzled by the absence of breakers at the input end of each main circuit. I expected to see them there to protect the transformers and other components from overloads or lightning strikes. I think there are breakers on the low-voltage side, tucked in just behind the transformers and thus not clearly visible from my window. But there’s nothing on the high side. I could only guess that such protection is provided by breakers near the output of the next substation upstream, the one that sends the 115 kV feeders into Greene Street.

That leaves the question of why pairs of circuits within the substation are cross-linked by breakers. I drew a simplified diagram of how things are wired up:

Circuit sketch

Two adjacent 115 kV circuits run from bottom to top; the breaker between them connects corresponding conductors—left to left, middle to middle, right to right. But what’s the point of doing so?

I had some ideas. If one transformer were out of commission, the pathway through the breaker could allow power to be rerouted through the remaining transformer (assuming it could handle the extra load). Indeed, maybe the entire design simply reflects a high level of redundancy. There are four incoming feeders and four transformers, but perhaps only two are expected to operate at any given time. The breaker provides a means of switching between them, so that you could lose a circuit (or maybe two) and still keep all the lights on. After all, this is a substation supplying power to many large facilities—the convention center (where the math meetings were held), a major hospital, large hotels, the ball park, theaters, museums, high-rise office buildings. Reliability is important here.

After further thought, however, this scheme seemed highly implausible. There are other substation layouts that would allow any of the four feeders to power any of the four transformers, allowing much greater flexibility in handling failures and making more efficient use of all the equipment. Linking the incoming feeders in pairs made no sense.


I would love to be able to say that I solved this puzzle on my own, just by dint of analysis and deduction, but it’s not true. When I got home and began looking at the photographs, I was still baffled. The answer eventually came via Google, though it wasn’t easy to find. Before revealing where I went wrong, I’ll give a couple of hints, which might be enough for you to guess the answer.

Hint 1. I was led astray by a biased sample. I am much more familiar with substations out in the suburbs or the countryside, partly because they’re easier to see into. Most of them are surrounded by a chain-link fence rather than a brick wall. But country infrastructure differs from the urban stuff.

Hint 2. I was also fooled by geometry when I should have been thinking about topology. To understand what you’re seeing in the Greene Street compound, you have to get beyond individual components and think about how it’s all connected to the rest of the network.

The web offers marvelous resources for the student of infrastructure, but finding them can be a challenge. You might suppose that the BGE website would have a list of the company’s facilities, and maybe a basic tutorial on where Baltimore’s electricity comes from. There’s nothing of the sort (although the utility’s parent company does offer thumbnail descriptions of some of their generating plants). Baltimore City websites were a little more helpful—not that they explained any details of substation operation, but they did report various legal and regulatory filings concerned with proposals for new or updated facilities. From those reports I learned the names of several BGE installations, which I could take back to Google to use as search terms.

One avenue I pursued was figuring out where the high-voltage feeders entering Greene Street come from. I discovered a substation called Pumphrey about five miles south of the city, near BWI airport, which seemed to be a major nexus of transmission lines. In particular, four 115 kV feeders travel north from Pumphrey to a substation in the Westport neighborhood, which is about a mile south of downtown. The Pumphrey-Westport feeders are overhead lines, and I had seen them already. Their right of way parallels the light rail route I had taken into town from the airport. Beyond the Westport substation, which is next to a light rail stop of the same name, the towers disappear. An obvious hypothesis is that the four feeders dive underground at Westport and come up at Greene Street. This guess was partly correct: Power does reach Greene Street from Westport, but not exclusively.

At Westport BGE has recently built a small, gas-fired generating plant, to help meet peak demands. The substation is also near the Baltimore RESCO waste-to-energy power plant (photo above), which has become a local landmark. (It’s the only garbage burner I know that turns up on postcards sold in tourist shops.) Power from both of these sources could also make its way to the Greene Street substation, via Westport.

I finally began to make sense of the city’s wiring diagram when I stumbled upon some documents published by the PJM Interconnection, the administrator and coordinator of the power “pool” in the mid-Atlantic region. PJM stands for Pennsylvania–New Jersey–Maryland, but it covers a broader territory, including Delaware, Ohio, West Virginia, most of Virginia, and parts of Kentucky, Indiana, Michigan, and Illinois. Connecting to such a pool has important advantages for a utility. If an equipment failure means you can’t meet your customers’ demands for electricity, you can import power from elsewhere in the pool to make up the shortage; conversely, if you have excess generation, you can sell the power to another utility. The PJM supervises the market for such exchanges.

The idea behind power pooling is that neighbors can prop each other up in times of trouble; however, they can also knock each other down. As a condition of membership in the pool, utilities have to maintain various standards for engineering and reliability. PJM committees review plans for changes or additions to a utility’s network. It was a set of Powerpoint slides prepared for one such committee that first alerted me to my fundamental misconception. One of the slides included the map below, tracing the routes of 115 kV feeders (green lines) in and around downtown Baltimore.

Baltimore 115 kV ring from PJM map

I had been assuming—even though I should have known better—that the distribution network is essentially treelike, with lines radiating from each node to other nodes but never coming back together. For low-voltage distribution lines in sparsely settled areas, this assumption is generally correct. If you live in the suburbs or in a small town, there is one power line that runs from the local substation to your neighborhood; if a tree falls on it, you’re in the dark until the problem is fixed. There is no alternative route of supply. But that is not the topology of higher-voltage circuits. The Baltimore network consists of rings, where power can reach most nodes by following either of two pathways.

In the map we can see the four 115 kV feeders linking Pumphrey to Westport. From Westport, two lines run due north to Greene Street, then make a right turn to another station named Concord Street. (As far as I can tell, there is no Concord Street in Baltimore. There’s a Concord Road, but it’s miles away in the northwest corner of the city. The substation is actually at 750 East Pratt Street, occupying the lower floors of an 18-story office tower.) They continue east to Monument Street, then north again to Erdman, where the ring receives additional power from other lines coming down from the north. The ring then continues west to Center Street and returns to Westport, closing the loop. The arrangement has some clear advantages for reliability. You can break any one link in a ring without cutting power to any of the substations; the power simply flows around the ring in the other direction.

This double-ring architecture calls for a total reinterpretation of how the Greene Street substation works. I had imagined the four 115 kV inputs as four lanes of one-way traffic, all pouring into the substation and dead-ending in the four transformers. In reality we have just two roadways, both of which enter the substation and then leave again, continuing on to further destinations. And they are not one-way; they can both carry traffic in either direction. The transformers are like exit ramps that siphon off a portion of the traffic while the main stream passes by.

At Greene Street, two of the underground lines entering the compound come from Westport, but the other two proceed to Concord Street, the next station around the ring. What about the breakers that sit between the incoming and outgoing branches of each circuit? They open up the ring to isolate any section that experiences a serious failure. For example, a short circuit in one of the cables running between Greene Street and Concord Street would cause breakers at both of those stations to open up, but both stations would continue to receive power coming around the other branch of the loop.
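
The ring logic is easy to demonstrate in miniature. Here is a small sketch (a plain breadth-first search; station names taken from the map discussion above, connectivity simplified to a single six-node ring) showing that cutting the Greene-Concord link still leaves every station energized:

    from collections import deque

    # Simplified single-ring model of the downtown 115 kV loop.
    RING = {
        "Westport": ["Greene", "Center"],
        "Greene":   ["Westport", "Concord"],
        "Concord":  ["Greene", "Monument"],
        "Monument": ["Concord", "Erdman"],
        "Erdman":   ["Monument", "Center"],
        "Center":   ["Erdman", "Westport"],
    }

    def reachable(graph, source, dead_edge):
        """Stations still fed from `source` after one link is switched out."""
        seen, queue = {source}, deque([source])
        while queue:
            station = queue.popleft()
            for neighbor in graph[station]:
                if {station, neighbor} == set(dead_edge) or neighbor in seen:
                    continue
                seen.add(neighbor)
                queue.append(neighbor)
        return seen

    # Breakers isolate the faulted Greene-Concord cable; power flows the other way.
    print(reachable(RING, "Westport", ("Greene", "Concord")) == set(RING))  # True

Take out any single link and the answer stays True; take out two links and you can strand the stations between them, which is exactly why the breakers are placed to cut out the smallest possible section.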

This revised interpretation was confirmed by another document made available by PJM, this one written by BGE engineers as an account of their engineering practices for transmission lines and substations. It includes a schematic diagram of a typical downtown Baltimore substation. The diagram makes no attempt to reproduce the geometric layout of the components; it rearranges them to make the topology clearer.

Typical downtown Baltimore dist substation

The two 115 kV feeders that run through the substation are shown as horizontal lines; the solid black squares in the middle are the breakers that join the pairs of feeders and thereby close the two rings that run through all the downtown substations. The transformers are the W-shaped symbols at the ends of the branch lines.

A mystery remains. One symbol in the schematic represents a disconnect switch, a rather simple mechanical device that generally cannot be operated when the power line is under load. A second symbol is identified in the BGE document as a circuit switcher, a more elaborate device capable of interrupting a heavy current. In the Greene Street photos, however, the switches at the two ends of the high-voltage bus bars appear almost identical. I’m not seeing any circuit switchers there. But, as should be obvious by now, I’m capable of misinterpreting what I see.

Michael Geist: Hidden in Plain Sight?: The Search For Canadian Content on Netflix

The calls for Internet and Netflix taxes are not the only demands raised by Canadian cultural groups regarding online video services. Many groups argue that the services should be required to make Canadian content more prominent, citing the challenge of “discoverability” of Canadian content in a world of seemingly unlimited choice. While the ACTRA call for government sanctions against search engines that refuse to prioritize Cancon in search results is an extreme example, many have asked the Broadcast and Telecommunications Legislative Review panel to either mandate that a certain percentage of the Netflix library consist of Canadian content or that it more actively promote Cancon on the service.

For example, Unifor wants to mandate that 20 per cent of Netflix television content be Canadian:

That 20 per cent of non-feature film programming available to subscribers be Canadian; that no less than 5 per cent of English-language feature films be Canadian; and that no less than 8 per cent of French-language feature films be Canadian.

Meanwhile, ACTRA calls for a 20 per cent across the board requirement:

All services offering on-demand programming content to Canadian consumers, including OTT services and music streaming services – regardless of the technology used to distribute the content – maintain a minimum of 20 per cent of Canadian content in the program selections offered to consumers.

Others focus on greater prominence for Canadian content. The CBC recommends:

Players operating in the Canadian system should provide appropriate prominence to Canadian content choices through search, menus and recommendations.

The CRTC hints at a similar requirement in the name of discoverability:

Whether it be music, podcasts, short‑form video, a one-hour drama series, feature-length film or any other type of content, regardless of what platform it is offered on, all stakeholders should be obligated and incented to promote and make content by Canadians discoverable, including government funding supports.

But how hard is it to find Canadian content on Netflix?  It turns out, not very.  Last weekend, I created a new Netflix account to see what someone with no algorithmic viewing history would find. I started with a simple search for Canada, which provided the following result featuring several Canadian shows (Kim’s Convenience, Frontier, and Schitt’s Creek).

Netflix Canada Search

The result also included the option to click on links for Canadian TV Shows, Canadian Movies, and more. Once I clicked on those links, dozens of shows and movies popped up.

 

Netflix Canada Search Results

After streaming about ten hours of Canadian shows – Schitt’s Creek, Kim’s Convenience, Frontier, and Heartland – I noticed that my main Netflix page now featured Canadian shows in the Popular on Netflix tab.

Popular on Netflix

and a new Celebrating Canadian People, Places, and Stories tab appeared on the main page.

Netflix Celebrating Canadian People, Places, and Stories

Not all of this content is strictly Cancon under the points system. Alongside “official” Cancon, there are programs filmed in Canada, starring Canadian actors, or featuring Canadian stories. Some might argue that only official Cancon counts. However, Canadian actors or local film production does matter: much of it counts toward Cancon points, benefits the country economically, and reflects a connection to the country. Regardless of how it is measured, however, the reality is that Netflix already has a sizable Canadian library, giving subscribers the option to watch hundreds of hours of Canadian content. Apparently, all it takes is a simple search for “Canada.”

The post Hidden in Plain Sight?: The Search For Canadian Content on Netflix appeared first on Michael Geist.

Tea Masters: Oolong is a creation of rebellious China

Zhuo Yan Oolong from Shan Lin Xi, winter 2018
Since I have received many positive comments on my latest article in English about high mountain Oolongs, I feel like rewriting it in French. It won't be a translation: that would feel too much like work, and this blog must remain a place of joy and sharing around tea. That's why, instead of talking about high mountain Oolong, I am going to talk about its ancestor, the Yan Cha of Wuyi, the first Oolong to appear in China (and in the world). The goal of this article is to show you that this Oolong is the fruit of the spirit of southern China.

Like all countries, China is not a homogeneous territory; there are strong differences between north and south, and between the interior and the coast. Geography has an impact on culture and people. Everyone knows that, nowadays, the coastal provinces of China that trade with other countries are more prosperous than the interior provinces. On the other hand, the cultural divide between northern and southern China is less well known in the West. Anyone who applied the French or Italian pattern (in the north it's cold and people work, in the south the weather is nice and people take it easy) would get it completely wrong!
The great fertile plains of the north are the cradle of ancient China. In the year 5 BC, only 10% of Chinese households were in the marshy or mountainous, hot and humid southeast. The northern plains allowed a simple cereal agriculture controlled by a few great families. That is why people thought big in northern China. The capitals there were immense and square, like great fields of cereal (wheat, millet). Power belonged to the clans that held the largest estates. They formed the aristocracy and relied on poorly educated, poorly paid peasants to work their lands. And the state's revenues rested above all on the agricultural yields of the most fertile lands: those of the north.
The marshy lands of southern China were taxed far less, and were sometimes free, because they were less fertile and more fragmented. Each plot first required draining, stabilizing... to make the land usable for growing rice or fruit. That demanded harder, more technical and more creative work. But many Chinese peasants preferred being masters of their own small farm to being serfs/tenants of a great family. That is why many went to settle in the south, and by 740 (Tang dynasty), 40% of Chinese households were south of the Huai river. This southward movement accelerated in 1127, when the Northern Song dynasty took refuge in the south.

Monks, too, preferred to set up their temples and monasteries in the mountains, because those places were not taxed at all, since it was very difficult to grow food there. Southern China thus became the eldorado of the most enterprising Chinese, those most resistant to the authority of the central power and the great families. The result is a temperament and a state of mind that differ greatly depending on whether one lives in the north or the south of China. In the north, notably in Beijing, everything is conceived on a grand scale (great plains, great projects, great enterprises...). Power is centralizing and concentrated in few hands. Solutions and products are standard, like cereals. In the south, by contrast, people think in terms of small businesses, freedom of enterprise, independence, creativity...
Photo by Rattana Vandy
Thus, the Wuyi mountains in Fujian are a nest of independence. Shortly after the 1911 revolution, it was in Wuyi that peasants founded the first cell of the Chinese Communist Party! At the time, this was a sign of the rebellious soul of these southern Chinese and their openness to new ideas. Already during the Qing dynasty, these farmers had shown an ingenuity characteristic of their southern culture: they had invented Oolong, the most complex tea to make. In this respect, they are a bit like our Burgundy winemakers, who make very great wines on cold, unpromising land.

Indeed, from the Ming dynasty to the present day, northern China has mostly liked to drink green tea. At its most basic, green tea is tea leaves dried very quickly so that they do not oxidize. Their processing is therefore minimal and does not demand much know-how. Their aromas are fresh, and quality depends mostly on the size of the buds and the timing of the harvest: the earlier in spring the harvest, the cooler the weather and therefore the finer the aromas. In terms of tea, this is the most standard product, the easiest to classify (by harvest date or number of leaves per kg).

The genius of the Wuyi farmers was to think about Oolong like southern Chinese. Instead of keeping things simple, they would make them complicated. Instead of farming large plantations on the plains, they would grow their tea trees on the small, steep, rocky parcels of Wuyi. But to stay competitive, instead of harvesting the leaves at the bud stage, they would wait until the leaves were mature, yet still tender. The climate of these southern mountains, cradled by a river, ensures precisely that the tea trees do not get too much sun and that the air stays humid, preserving the freshness of the leaves. (The same principles apply to Taiwan's high mountain Oolong.) Then the farmers oxidize the leaves with great skill, to reach the degree of oxidation that best expresses their character. And the process ends with an intense but delicate roasting that preserves the freshness of the aromas for decades!

They obtained a tea radically different from green tea. In fact, they obtained not just one tea but dozens, even hundreds of different teas, depending on the peak where the tea is planted, which cultivar is used, which style of roasting is applied... Oolong is thus the least standardized, the most individual of teas. There is one for every taste and every budget. But the pride and success of the Wuyi farmers is that the quality of their Yan Cha was recognized very quickly. From the Qing dynasty onward, there are bills of sale showing that the best Wuyi leaves, Da Hong Pao, sold for the price of gold!
Photo by Rattana Vandy
Nowadays, we find these two states of mind again in the way tea is sold. On one side, there is the northern approach of the big trade fairs, where what counts is the number of exhibitors and the floor area. There, you are asked how many branches you have, how many tons of tea you sell, how many employees you command, how much profit you generate... Packaging and marketing matter more than the product. At the opposite end, there is the southern approach, which gives pride of place to the quality of the product, the creative work of the farmers, remote places, high mountains, leaves bitten by insects, old harvests.

Moreover, Oolong is not a drink of power with which to impress one's guests. Its brewing is far too technical and capricious for that. When Xi Jinping invites another head of state to drink tea, he always has it prepared by a tea master (or a pretty tea mistress)! That is how people like to drink tea in northern China: prepared by a third person at your service. In the south, on the contrary, tea is drunk in the Chaozhou manner, in small groups of three people of equal rank. Often, the head of the company makes the tea for his clients himself. It is meant to be more personal and to weave better human bonds.

Conclusion: from every point of view (culture, production, preparation, sale), Oolong is the expression of the creative and rebellious soul of southern China. And that is also why I love it!!!

Jesse Moynihan: New Studio = New Habits

MattCha's Blog: white2tea’s “Sales”, Sweepstakes Marketing, and Pressing 3KG Puerh Cakes


White2tea is good at running promotions and I find them super interesting, fun, and pretty brilliant.  White2tea calls them a “Sale” but by definition a sale is a period during which a shop or dealer sells goods at reduced prices.  The interesting thing is that during these so-called white2tea “sales”, puerh is actually not sold at reduced prices- the price of the tea doesn’t change during these “sales” at all.  When Paul sets a price on his puerh, it rarely moves.  It is actually never reduced.  Ever.  On the positive side, if he does raise prices on his puerh, it is usually only a very small yearly increase, smaller than most other Western vendors’, I think.

I can respect that he sets the price and sticks to it.  But for a puerh company that is so concerned about the language of describing puerh age and terroir, I find it interesting that they are not as concerned about the actual meaning of words like “sale” or “aged puerh”.  The double standard sure makes me chuckle and shake my head.  Can anyone else explain this to me?  Am I missing something here?

I suppose you could consider free shipping an overall saving, but I don’t think free shipping, by definition, could be considered a sale- although it is, in effect, a reduction in the total cost of the tea.  Ok, enough of that… hahaha.

This article is supposed to be about those epic huge 3KG puerh tea cakes that white2tea has pressed…

So this is how the story unfolds…

Back a year or so ago I wrote an article called The Xiaobinging of the Puerh Industry.  In that article I criticized the pressing of cakes of 200g or smaller and the Western producers who perpetuate this trend.  White2tea has not pressed a cake larger than 200g since 2014, so they fall squarely into this category.

At the end of that article I stated the following:

I like the big, chunky, beefy, robust, old school feeling of these cakes and the industry that they represent.  The larger, the better!  1Kg, even 2Kg, cakes and bricks- "bring'em on" I say.  In fact, I challenge vendors to release one of these big guys in response to the xiao binging of the puerh industry.  People will buy- I’ll be the first one.

Then, on Black Friday, in dramatic fashion white2tea offered a sweepstakes giveaway of one 3KG shu puerh cake, with a neifei that offers a response to this criticism.  If the Snoozefest cake was in response to tinklefor, then this giant puerh must be in response to this article, I think.  So, of course, naturally I want in on this giant cake.  I literally stated (above) that I’ll be the first one!

But it’s not so simple: I have to place an order to have a shot at it.  So I do, because I was planning on getting this cake from white2tea anyways (and its price isn’t being reduced anytime soon).   So why not wait until Black Friday?  At the very least I can save on the shipping, right?

So, I really felt it was my destiny to win this Qing Bing 3KG cake.  I bet you thought you would win too?  I mean, we have no idea what the odds of winning are because that depends on how many people place an order over the promotional days.  So we are blind to that.  This type of marketing is new from white2tea and plays on our susceptibility to thinking that we somehow have a greater chance of winning than others.

Darn, I didn’t win that 3KG shu cake…. Oh well wasn’t meant to be…

But that sample cake I ordered was pretty damn good… I think I would like a bit more of the puerh.  I need to re-order.  But when?

It just so happens that during my first sampling of the cake there is a white2tea New Year’s Sale and guess what?  They have many more giant cakes to give away.  The 3KG sheng puerh especially is calling to me, but I need to place an order to be entered.  Well, if I didn’t win the last Qing Bing cake sweepstakes, then surely this sheng puerh version (which is what I primarily drink these days) is mine.  So the old gambler’s fallacy probably got the best of me.  I re-order and…
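
A quick aside on why the gambler's fallacy is a fallacy here: the two sweepstakes are independent draws, so losing the first does nothing to improve the second.  A sketch with a made-up entrant count (the real number of qualifying orders is unknown):

    # Odds of winning a one-winner sweepstakes with N qualifying orders.
    N = 500                        # assumed entrants; the vendor doesn't publish this
    p_single = 1 / N               # each draw, win probability is just 1/N
    p_at_least_one = 1 - (1 - p_single) ** 2   # over two independent draws
    print(p_single, p_at_least_one)            # 0.002 vs ~0.004

    # Losing the first draw doesn't change the second:
    # P(win 2nd | lost 1st) = p_single, exactly as before.

Two entries give you roughly twice the chance overall, but on the day of the second draw your odds are the same 1-in-N they always were.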

Darn, I didn’t win that 3KG sheng cake (no crazy large white tea cakes either)…. Oh well wasn’t meant to be…

In the end, I was empty-handed, so the joke’s on me… hahaha

But man, did I ever order some nice puerh which I was planning on ordering anyways.  The whole promotional sideshow was just a bit of fun layered on top.  Thanks for the ride Paul (Twodog), it was glorious.

Interesting how sweepstakes marketing gets you…
It sure got me.

Peace

CreativeApplications.Net: Terra Mars – ANN’s topography of Mars in the visual style of Earth

Created by SHI Weili, Terra Mars is a speculative visualisation that uses an ANN (artificial neural network) to generate images resembling satellite imagery of Earth, modelled on topographical data of Mars. Terra Mars suggests a new approach to creative applications of artificial intelligence—using its capability of remapping to broaden the domain of […]

Charles Petzold: Reading “Little Dancer Aged Fourteen”

I've always been both fascinated and disturbed by Degas' paintings of young ballerinas. The tutus are so frilly, the legs and arms exquisitely posed even when they're not dancing, but the faces are often turned away from us, and when we do see their faces, they are deliberately smudged, or appear pained, weary, and exhausted, so unlike the radiant faces of Renoir's young women. There may be beauty in the ballerinas' poses and movement, but no joy in their execution. Rarely do we glimpse a smile.

... more ...


churchturing.org / 2019-02-20T13:21:02