Bifurcated Rivets: From FB

message

Bifurcated Rivets: From FB

Ruby Keeler

Bifurcated Rivets: From FB

Untitled

Bifurcated Rivets: From FB

Blue Bossa

Bifurcated Rivets: From FB

Why would you cut this?

Recent additions: phonetic-languages-ukrainian-array 0.2.1.0

Added by OleksandrZhabenko, 2021-07-24T12:01:54Z.

Prepares Ukrainian text to be used as a phonetic language text

Recent additions: phonetic-languages-phonetics-basics 0.8.1.0

Added by OleksandrZhabenko, 2021-07-24T11:59:16Z.

A library for working with generalized phonetic languages usage.

Recent additions: token-limiter-concurrent 0.0.0.0

Added by Norfair, 2021-07-24T11:59:01Z.

A thread-safe concurrent token-bucket rate limiter that guarantees fairness
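The synopsis names a textbook algorithm, so a minimal illustration of the token-bucket technique itself may help. This C++ sketch is generic (it is not token-limiter-concurrent's Haskell API, and a truly fair implementation would also queue waiters first-come-first-served rather than letting them race for the lock):

#include <algorithm>
#include <chrono>
#include <mutex>
#include <thread>

class TokenBucket {
public:
  TokenBucket(double capacity, double refillPerSec)
      : tokens_(capacity), capacity_(capacity), refillPerSec_(refillPerSec),
        last_(std::chrono::steady_clock::now()) {}

  // Block until a token is available, then consume it.
  void acquire() {
    for (;;) {
      {
        std::lock_guard<std::mutex> lock(m_);
        refill();
        if (tokens_ >= 1.0) {
          tokens_ -= 1.0;
          return;
        }
      }
      std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
  }

private:
  // Top up the bucket according to how much time has passed.
  void refill() {
    auto now = std::chrono::steady_clock::now();
    std::chrono::duration<double> dt = now - last_;
    last_ = now;
    tokens_ = std::min(capacity_, tokens_ + dt.count() * refillPerSec_);
  }

  double tokens_, capacity_, refillPerSec_;
  std::chrono::steady_clock::time_point last_;
  std::mutex m_;
};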

Recent additions: streamly-lmdb 0.3.0

Added by shlok, 2021-07-24T11:10:14Z.

Stream data to or from LMDB databases using the streamly library.

Hackaday: ESP8266 Adds WiFi Logging to IKEA’s Air Quality Sensor

Introduced back in June, the IKEA VINDRIKTNING is a $12 USD sensor that uses colored LEDs to indicate the relative air quality in your home depending on how many particles it sucks up. Looking to improve on this simplistic interface, [Sören Beye] tacked an ESP8266 to the board so it can broadcast sensor readings out over MQTT.

Just three wires link the ESP8266 to the PCB.

While some of us would have been tempted to gut the VINDRIKTNING and attach its particle sensor directly to the ESP8266, the approach [Sören] has used is actually quite elegant. Rather than replacing IKEA’s electronics, the microcontroller is simply listening in on the UART communications between the sensor and the original controller. This not only preserves the stock functionality of the VINDRIKTNING, but simplifies the code as the ESP doesn’t need to do nearly as much.

All you need to do if you want to perform this modification is solder a couple wires to convenient test pads on the VINDRIKTNING board, then flash the firmware (or write your own version), and you’re good to go. There’s plenty of room inside the case for the ESP8266, though you may want to tape it down so it doesn’t impact air flow.
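For a sense of what that firmware boils down to, here is a minimal sketch of the listen-only approach, assuming a 9600-baud sensor link tapped on GPIO2 and a broker at 192.168.1.10 (the pin, credentials, and topic are hypothetical placeholders, and [Sören]'s actual firmware parses the sensor's frames rather than forwarding raw bytes):

#include <ESP8266WiFi.h>
#include <PubSubClient.h>
#include <SoftwareSerial.h>

SoftwareSerial sensorUart(2, -1);  // RX on GPIO2; TX unused, we only listen
WiFiClient wifi;
PubSubClient mqtt(wifi);

void setup() {
  sensorUart.begin(9600);                // assumed baud rate of the sensor link
  WiFi.begin("my-ssid", "my-password");  // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
  mqtt.setServer("192.168.1.10", 1883);  // placeholder broker address
  mqtt.connect("vindriktning");
}

void loop() {
  mqtt.loop();
  // Passive tap: the stock controller keeps talking to the sensor,
  // we just read the same reply bytes and republish them.
  if (sensorUart.available()) {
    char msg[8];
    snprintf(msg, sizeof(msg), "%d", sensorUart.read());
    mqtt.publish("vindriktning/raw", msg);
  }
}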

While not required, [Sören] also recommends making a small modification to the VINDRIKTNING which makes it a bit quieter. Apparently the 5 V fan inside the sensor is occasionally revved up by the original controller, rather than kept at a continuous level that you can mentally tune out. Powering the sensor’s fan from the ESP8266’s 3.3 V pin instead makes it run continuously at a lower speed.

We’ve seen custom firmware for IKEA products before, but this approach, which keeps the device’s functionality intact regardless of what’s been flashed to the secondary microcontroller, is particularly appealing for those of us who can’t seem to keep the gremlins out of our code.

[Thanks to nexgensri for the tip.]

MetaFilter: Double victory points for every indigenous village you enslave

The Board Games That Ask You to Reenact Colonialism. A newish wave of sophisticated, adult board games have made exploitation part of their game mechanics. A reckoning is coming. "Puerto Rico is the only game I ever turned down even a single trial play of, because of a literal curl of my lip in distaste as I was being taught the game."

Recent additions: phonetic-languages-phonetics-basics 0.8.0.0

Added by OleksandrZhabenko, 2021-07-24T10:33:05Z.

A library for working with generalized phonetic languages usage.

Slashdot: Society Is Right On Track For a Global Collapse, New Study of Infamous 1970s Report Finds

fahrbot-bot shares a report from Live Science: Human society is on track for a collapse in the next two decades if there isn't a serious shift in global priorities, according to a new reassessment of a 1970s report, Vice reported. In that report -- published in the bestselling book "The Limits to Growth" (1972) -- a team of MIT scientists argued that industrial civilization was bound to collapse if corporations and governments continued to pursue continuous economic growth, no matter the costs. The researchers forecasted 12 possible scenarios for the future, most of which predicted a point where natural resources would become so scarce that further economic growth would become impossible, and personal welfare would plummet. The report's most infamous scenario -- the Business as Usual (BAU) scenario -- predicted that the world's economic growth would peak around the 2040s, then take a sharp downturn, along with the global population, food availability and natural resources. This imminent "collapse" wouldn't be the end of the human race, but rather a societal turning point that would see standards of living drop around the world for decades, the team wrote. So, what's the outlook for society now, nearly half a century after the MIT researchers shared their prognostications? Gaya Herrington, a sustainability and dynamic system analysis researcher at the consulting firm KPMG, decided to find out. [...] Herrington found that the current state of the world -- measured through 10 different variables, including population, fertility rates, pollution levels, food production and industrial output -- aligned extremely closely with two of the scenarios proposed in 1972, namely the BAU scenario and one called Comprehensive Technology (CT), in which technological advancements help reduce pollution and increase food supplies, even as natural resources run out. While the CT scenario results in less of a shock to the global population and personal welfare, the lack of natural resources still leads to a point where economic growth sharply declines -- in other words, a sudden collapse of industrial society. "The good news is that it's not too late to avoid both of these scenarios and put society on track for an alternative -- the Stabilized World (SW) scenario," the report notes. "This path begins as the BAU and CT routes do, with population, pollution and economic growth rising in tandem while natural resources decline. The difference comes when humans decide to deliberately limit economic growth on their own, before a lack of resources forces them to." "The SW scenario assumes that in addition to the technological solutions, global societal priorities change," Herrington wrote. "A change in values and policies translates into, amongst other things, low desired family size, perfect birth control availability, and a deliberate choice to limit industrial output and prioritize health and education services." After this shift of values occurs, industrial growth and global population begin to level out. "Food availability continues to rise to meet the needs of the global population; pollution declines and all but disappears; and the depletion of natural resources begins to level out, too," adds Live Science. "Societal collapse is avoided entirely."

Read more of this story at Slashdot.

Recent CPAN uploads - MetaCPAN: Text-PO-v0.1.0

Read and write PO files

Recent CPAN uploads - MetaCPAN: Module-Generic-v0.15.4

Generic Module to inherit from

Hackaday: How the Flipper Zero Hacker Multitool Gets Made and Tested

Flipper Zero is an open-source multitool for hackers, and [Pavel] recently shared details on what goes into the production and testing of these devices. Each unit contains four separate PCBs, and in high-volume production it is inevitable that some boards are faulty in some way. Not all faults are identical — some are not even obvious —  but they all must be dealt with before they end up in a finished product.

One of several custom test jigs for Flipper Zero. Faults in high volume production are inevitable, and detecting them early is best.

Designing a process to effectively detect and deal with faults is a serious undertaking, one the Flipper Zero team addressed by designing a separate test station for each of the separate PCBs, allowing detection of defects as early as possible. Each board gets fitted into a custom test jig, then is subjected to an automated barrage of tests to ensure everything is as expected before being given the green light. A final test station gives a check to completed assemblies, and every test is logged into a database.

It may seem tempting to skip testing the individual boards and instead just do a single comprehensive test on finished units, but when dealing with production errors, it’s important to detect issues as early in the workflow as possible. The later a problem is detected, the more difficult and expensive it is to address. The worst possible outcome is to put a defective unit into a customer’s hands, where an issue is found only after all of the time and cost of assembly and shipping has already been spent. Another reason to detect issues early is that some faults become more difficult to address the later they are discovered. For example, a dim LED or poor antenna performance is much harder to troubleshoot when detected in a completely assembled unit, because the fault could be anywhere.

[Pavel] provides plenty of pictures and details about the production of Flipper Zero, and it’s nice to see how the project is progressing since its hyper-successful crowdfunding campaign.

MetaFilter: If we can soar ...

What Birmingham Roller Pigeons Offer the Men of South Central

Birmingham Rollers are a kind of domesticated pigeon that, well ... watch them do their thing. How they got from their roots in the English Midlands to popularity in LA is the subject of this compelling long read.

Slashdot: Oregon Congressman Proposes New Space Tourism Tax

U.S. Rep. Earl Blumenauer (D-Oregon) plans to introduce legislation called the Securing Protections Against Carbon Emissions (SPACE) Tax Act, which would impose new excise taxes on space tourism trips. Space.com reports: "Space exploration isn't a tax-free holiday for the wealthy. Just as normal Americans pay taxes when they buy airline tickets, billionaires who fly into space to produce nothing of scientific value should do the same, and then some," Blumenauer said in a statement issued by his office. "I'm not opposed to this type of space innovation," added Blumenauer, a senior member of the House of Representatives' Ways and Means Committee. "However, things that are done purely for tourism or entertainment, and that don't have a scientific purpose, should in turn support the public good." The proposed new tax would likely be levied on a per-passenger basis, as is done with commercial aviation, the statement said. "Exemptions would be made available for NASA spaceflights for scientific research purposes," the statement reads. "In the case of flights where some passengers are working on behalf of NASA for scientific research purposes and others are not, the launch excise tax shall be the pro rata share of the non-NASA researchers." There would be two taxation tiers, one for suborbital flights and another for missions that reach orbit. The statement did not reveal how much the tax would be in either case or if the collected revenue would be earmarked for any specific purpose. Such a purpose could be the fight against climate change, if the proposed act's full name is any guide. Blumenauer is concerned about the potential carbon footprint of the space tourism industry once it gets fully up and running, the statement said.

Read more of this story at Slashdot.

Disquiet: Moon

Nearly Midnight
      nO passing cars
      nO sidewalk chatter 
   sileNt movie

Hackaday: Joker Monitor Keeps an Eye on Hazardous Gas Levels

The Joker is a popular character in the Batman franchise, and at times uses poisonous gases as part of his criminal repertoire. That inspired this fun project by [kutluhan_aktar], which aims to monitor the level of harmful gases in the air.

The project doesn’t use just one gas sensor, but several! It packs the MQ-2, MQ-3, MQ-4, MQ-6, and MQ-9. This gives it sensitivity to a huge variety of combustible gases, as well as detecting carbon monoxide. The sensors are read by an Arduino Nano, which displays results on an RGB LED as well as an attached IPS screen.

Readings from each sensor can be selected by using an infrared remote. To work better as a safety device, however, the Arduino could automatically cycle through the sensors, checking each one periodically and raising an alarm in the event of a high reading.
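Since the MQ parts are simple analog sensors, that automatic mode amounts to only a few lines. Here is a rough sketch, assuming the five sensors are wired to A0 through A4 and a buzzer to pin 8 (pins and threshold are invented placeholders; a real threshold would need calibrating in clean air):

const uint8_t SENSOR_PINS[] = {A0, A1, A2, A3, A4};
const uint8_t BUZZER_PIN = 8;
const int ALARM_THRESHOLD = 600;  // raw ADC value, placeholder only

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  bool alarm = false;
  for (uint8_t i = 0; i < 5; i++) {
    int reading = analogRead(SENSOR_PINS[i]);  // 0-1023 on a Nano
    Serial.print("MQ sensor ");
    Serial.print(i);
    Serial.print(": ");
    Serial.println(reading);
    if (reading > ALARM_THRESHOLD) alarm = true;
  }
  digitalWrite(BUZZER_PIN, alarm ? HIGH : LOW);
  delay(1000);  // sweep the whole bank once a second
}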

The whole project is built on a custom PCB which is artfully constructed with an image of the Joker himself. It helps to make the project a bit more of a display piece, and speaks to the aesthetic skills of its creator.

It’s a fun build, and one that could be mighty capable with a few software tweaks. With that said, if you’re working in a space with real hazards from combustible gases, it may be worth investing in some properly rated safety equipment rather than relying on an Arduino project.

Incidentally, if you’d like to improve the results from using such gas sensors, we’ve looked at that in the past. Video after the break.

Explosm.net: Comic for 2021.07.24

New Cyanide and Happiness Comic

Slashdot: Maker of Dubious $56K Alzheimer's Drug Offers Cognitive Test No One Can Pass

An anonymous reader quotes a report from Ars Technica: Do you ever forget things, like a doctor's appointment or a lunch date? Do you sometimes struggle to think of the right word for something common? Do you ever feel more anxious or irritable than you typically do? Do you ever feel overwhelmed when trying to make a decision? If you answered "no, never" to all of those questions, there's a possibility that you may not actually be human. Nevertheless, you should still talk to a doctor about additional cognitive screenings to check if you have Alzheimer's disease. At least, that's the takeaway from a six-question quiz provided in part by Biogen, the maker of an unproven, $56,000 Alzheimer's drug. The six questions include the four above, plus questions about whether you ever lose your train of thought or ever get lost on your way to or around a familiar place. The questions not only bring up common issues that perfectly healthy people might face from time to time, but the answers any quiz-taker provides are also completely irrelevant. No matter how you answer -- even if you say you never experience any of those issues -- the quiz will always prompt you to talk with your doctor about cognitive screening. The results page even uses your zip code to provide a link to find an Alzheimer's specialist near you. Biogen says the quiz website is part of a "disease awareness educational program." But it appears to be part of an aggressive strategy to sell the company's new Alzheimer's drug, Aduhelm, which has an intensely controversial history, to say the least. What's the controversial history you may ask? According to Ars, the drug "flunked out of two identical Phase III clinical trials in 2019." A panel of expert advisors for the FDA overwhelmingly voted against approval, yet it still was approved by the FDA on June 7. It also has a list price of $56,000 for a year's supply. The report goes on to say that the company is basically making up the statistic that "about 1 in 12 Americans 50 years and older" has mild cognitive impairment due to Alzheimer's. Experts say they know of no evidence to back up that statistic and it appears to be a significant overestimate. Furthermore, two medical experts from Georgetown University said the company's quiz website "appears designed to ratchet up anxiety in anyone juggling multiple responsibilities or who gets distracted during small talk." They added: "Convincing perfectly normal people they should see a specialist, be tested for amyloid plaque, and, if present, assume they have early Alzheimer's is a great strategy for increasing Aduhelm prescriptions... [It] could lead to millions of prescriptions -- and billions of dollars in profit -- for an ineffective and expensive drug."

Read more of this story at Slashdot.

MetaFilter: The Jessica Simulation

The death of the woman he loved was too much to bear. Could a mysterious website allow him to speak with her once more? A long-form essay from the San Francisco Chronicle.

OpenAI's GPT-3 previously, a lot of 2020 AI news previouslier.

An older take on a chatbot built to assist with grieving the loss of a loved one here.

Author Jason Fagone's work discussed on MeFi previously.

Developer Jason Rohrer discussed for thought-provoking indie games on MeFi previously and previously, previously, previouslier, and previousliest.

Slashdot: Hole Blasted In Guntrader: UK Firearms Sales Website's CRM Database Breached, 111K Users' Info Spilled Online

Criminals have hacked into a Gumtree-style website used for buying and selling firearms, making off with a 111,000-entry database containing partial information from a CRM product used by gun shops across the UK. The Register reports: The Guntrader breach earlier this week saw the theft of a SQL database powering both the Guntrader.uk buy-and-sell website and its electronic gun shop register product, comprising about 111,000 users and dating between 2016 and 17 July this year. The database contains names, mobile phone numbers, email addresses, user geolocation data, and more including bcrypt-hashed passwords. It is a severe breach of privacy not only for Guntrader but for its users: members of the UK's licensed firearms community. Guntrader spokesman Simon Baseley told The Register that Guntrader.uk had emailed all the users affected by the breach on July 21 and issued a further update yesterday. Guntrader is roughly similar to Gumtree: users post ads along with their contact details on the website so potential purchasers can get in touch. Gun shops (known in the UK as "registered firearms dealers" or RFDs) can also use Guntrader's integrated gun register product, which is advertised as offering "end-to-end encryption" and "daily backups", making it (so Guntrader claims) "the most safe and secure gun register system on today's market." [British firearms laws say every transfer of a firearm (sale, drop-off for repair, gift, loan, and so on) must be recorded, with the vast majority of these also being mandatory to report to the police when they happen...] The categories of data in the stolen database are: Latitude and longitude data; First name and last name; Police force that issued an RFD's certificate; Phone numbers; Fax numbers; bcrypt-hashed passwords; Postcode; Postal addresses; and User's IP addresses. Logs of payments were also included, with Coalfire's Barratt explaining that while no credit card numbers were included, something that looks like a SHA-256 hashed string was included in the payment data tables. Other payment information was limited to prices for rifles and shotguns advertised through the site. The Register recommends you check if your data is included in the hack by visiting Have I Been Pwned. If you are affected and you used the same password on Guntrader that you used on other websites, you should change it as soon as possible.

Read more of this story at Slashdot.

Hackaday: PnPAssist: A “Smart” Build Platform For Manual PCB Assembly

Open source pick and place machines have come a long way in the past few years, but are not necessarily worth the setup time and machine cost if you are only building a few PCBs at a time. [Nuri Erginer] found himself in this situation regularly, so he created PnPAssist, a “smart” build platform to speed up manual PCB assembly. Video after the break.

The PnPAssist consists of a small circular platform that can automatically translate and rotate to place the current footprint in the middle of the platform, right in the center of your microscope’s view, and a laser crosshair. The entire device can also rotate freely on its base to avoid contorting your arm to match the footprint orientation. Just export the PnP file from your favorite PCB design software, load it on a micro SD card, plug it into the PnPAssist, and start assembling. The relevant component information is displayed on a small OLED display right on the machine. [Nuri] has also created a component organizing tray that will indicate the correct compartment with an RGB LED.

Below the build platform, a 3D printed gear is in contact with a pair of parallel lead screws driven by stepper motors. The relative motion of the lead screws allows the platform to rotate, translate, or both. This arrangement also means the machine is a lot more compact than a conventional XY-table and can be packed away when not in use. The base is held firmly in place on the workbench with a set of suction cups or screws. Power is provided through the fixed base using a slip-ring, so there are no cables to twist up as you spin the machine around.
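If we have read the mechanism right, the platform gear effectively sits between two racks, so the inverse kinematics collapse to two lines. A speculative sketch (the function, units, and gear-radius parameter are our own reading, not [Nuri]'s firmware): moving both carriages together translates the platform, while moving them oppositely rolls the gear and rotates it.

// dx: desired translation along the screw axis (mm)
// dtheta: desired platform rotation (radians)
// gearRadius: pitch radius of the platform gear (mm)
void platformToScrews(float dx, float dtheta, float gearRadius,
                      float &screwA, float &screwB) {
  // The gear center moves with the average of the two rack displacements;
  // rotation comes from their difference over twice the radius.
  screwA = dx + gearRadius * dtheta;
  screwB = dx - gearRadius * dtheta;
}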

We can certainly see this machine being a massive help on any small electronics assembly job, especially considering the fast setup time and relative simplicity. It will also work well with the 3D printed component dispensers or component turntable we featured in the past.

Slashdot: Facebook Details Experimental Mixed Reality and Passthrough API

Facebook shared some details about its experimental Passthrough API to enable new kinds of mixed reality apps for Oculus Quest 2. UploadVR reports: The feature may also serve as the foundation for the company's long-term efforts in augmented reality, effectively turning Quest 2 into a $299 AR developer kit. When asked if the feature is coming to the original Oculus Quest, a Facebook representative replied "today, this is only available for Quest 2." The new feature will be available to Unity developers in an upcoming software development kit release "with support for other development platforms coming in the future." Facebook says apps using the API "cannot access, view, or store images or videos of your physical environment from the Oculus Quest 2 sensors" and raw images from the four on-board cameras "are processed on-device." The following capabilities will be available with the passthrough API, according to Facebook: "Composition: You can composite Passthrough layers with other VR layers via existing blending techniques like hole punching and alpha blending. Styling: You'll be able to apply styles and tint to layers from a predefined list, including applying a color overlay to the feed, rendering edges, customizing opacity, and posterizing. Custom Geometry: You can render Passthrough images to a custom mesh instead of relying on the default style mesh -- for example, to project Passthrough on a planar surface."

Read more of this story at Slashdot.

MetaFilter: Okay cheers then thanks then cheers okay cheers thanks cheers...

Don't trust Bigipedia (previously)? Want something more trustworthy and less physically possible? Look no further than The Museum of Everything, the eighteen-episode comedy audio sketch series with a dash of magical realism - so don't sweat the impossibility of a provincial museum just off the M3 that's curated by Tom Waits and contains literally everything (except maybe Badgerland (animated episode 3)). Well, not until you get to the... GIFT SHOP. (aaahhh...)

Recent CPAN uploads - MetaCPAN: App-IndonesianHolidayUtils-0.064

List Indonesian holidays

Changes for 0.064 - 2021-07-24

  • No functional changes.
  • [build] Rebuild to prettify usages.

Recent CPAN uploads - MetaCPAN: Complete-Bash-0.336

Completion routines for bash shell

Changes for 0.336 - 2021-07-24

  • Don't pass to fzf if inside Emacs.

Recent CPAN uploads - MetaCPAN: Tree-Term-20210724

Create a parse tree from an array of terms representing an expression.

MetaFilter: Jerusalem Demsas on progressive obstructionism in blue states

Jerusalem Demsas on progressive obstructionism which prevents Democratic-run states like California from building infrastructure and housing, making them outrageously expensive. "I thought that I was going to ride the Purple Line [a project that's been delayed for 20 years] when I was in high school. And that never happened. And people are really mad. So you have a situation here, where a very few people have managed to proffer up a bunch of facially neutral, race neutral, class neutral, explanations for why it's a bad idea to build a public works project. And at the end of the day, the people who have suffered the most are domestic workers who are taking multiple bus lines, or having to figure out other ways to get to work every single day. And they're bearing the cost of all of that."

Demsas discusses some alternative theories. Is the problem technical difficulty? High labor costs?
So her research — Brooks and her co-author Zach Liscow who's at Yale — they look into this problem of, why is it that highways have gotten much more expensive to build? And it's an interesting question separate from transit. Because with transit, we don't do it that often. But we build highways all the time here. We lengthen them, we build interchanges, we keep them up, we maintain them.

So we should be very good at it. And in a lot of ways, we are. And they're able to rule out a lot of the traditional explanations, things like, it's either unions, or it's the geography of the places that we're talking about — it's just getting more difficult to build, because we're building in harder and harder geographies for whatever reason. And so, they rule out these kinds of explanations.

And what they’re left with is this concept they call “citizen voice.” And there are regulations that have been put in place, that a lot of times come from a good place. They’re saying the government should not be able to steamroll over communities — in particular marginalized communities. There are many instances in the 20th century of the government building highways through minority communities, and really destroying them, and creating a lot of negative impacts. And so, one of the big regulations that comes out in 1970 is the National Environmental Policy Act.

And this is meant to ensure that the government — if it's either doing a federal project, or a project that is receiving federal money — needs to do an environmental impact statement, and make sure they're engaging the community properly, so that you don't get these massive harms accruing to these local communities, because the government's just stomping through them. What ends up happening is what ends up happening a lot of the time when you increase participatory democracy at the local level. Which is that, it is not used by people who are marginalized.

It is [very infrequently] ever used by people who are really concerned about the fact that the government is not representative of their interests. Who it's used by is, very frequently, individuals who are very wealthy, who are white, who are already privileged in the political system, to stop transportation, and to stop public works projects, or anything that might be broadly beneficial to the community, from being placed in their neighborhoods.
This is true even in progressive neighborhoods.

Demsas on infrastructure: Why does it cost so much to build things in America?

On housing: Covid-19 caused a recession. So why did the housing market boom?

Similar observations from Matthew Yglesias on the high cost of building infrastructure: We need more ambition in the parts of the country where progressives can win.

things magazine: Sounds and sights from beyond space

A very random collection of things. Fire Maidens of Outer Space (1956) / water simulation on kinetic displays / photorealist paintings by Ben Weiner / love this: noclip, unfettered exploration of classic 3D gaming environments in a browser (via b3ta) … Continue reading

Hackaday: A Nerf Ball Turret Complete With FPV

Sentry turrets have long been a feature of science fiction films and video games. These days, there’s nothing stopping you from building your own. [otjones99] has done just that, with his FPV Nerf Ball launcher.

The system works on the basic principle of launching soft foam balls via a pair of counter-rotating wheels. It’s a remarkably simple way of electrically launching projectiles without a lot of fuss and mucking around, and it works well here. A blower fan is used to gently roll ammunition towards the launcher wheels as required. There’s a hopper-style clip which uses a servo to drop one ball at a time into the launching tube.

An Arduino Uno is responsible for slewing the turret, and handling the firing process. A joystick is fitted with an NRF24L01 radio module to send signals to the Arduino to aim the turret, while an FPV camera mounted on the turret allows the user to remotely see what the turret is aiming at. With a simple pull of the joystick’s trigger, the turret opens fire.
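The control link is a common NRF24L01 pattern; a minimal sketch of the receiving side might look like the following, assuming the TMRh20 RF24 library, CE/CSN on pins 9 and 10, and a made-up packet layout (the actual pinout and protocol in [otjones99]'s build may well differ):

#include <SPI.h>
#include <RF24.h>
#include <Servo.h>

RF24 radio(9, 10);                  // CE, CSN (assumed wiring)
const byte PIPE_ADDR[6] = "nerf1";  // must match the joystick transmitter
Servo pan, tilt;

// Both ends must agree on this layout; it is a placeholder here.
struct Packet { int16_t x; int16_t y; bool fire; };

void setup() {
  pan.attach(5);
  tilt.attach(6);
  radio.begin();
  radio.openReadingPipe(1, PIPE_ADDR);
  radio.startListening();
}

void loop() {
  if (radio.available()) {
    Packet p;
    radio.read(&p, sizeof(p));
    // Map raw joystick readings (0-1023) onto servo angles.
    pan.write(map(p.x, 0, 1023, 0, 180));
    tilt.write(map(p.y, 0, 1023, 0, 180));
    // p.fire would gate the ball-drop servo and launcher wheels here.
  }
}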

It’s a fun build, and one that shouldn’t do too much damage to anything given the soft pliable nature of the Nerf ammunition. Of course, if you don’t want to aim your turret yourself, you can always go ahead and build yourself an automated sentry gun. Video after the break.

ScreenAnarchy: Review: SETTLERS, Wherever We Go, There We Are

Sofia Boutella, Ismael Cruz Cordova and Brooklynn Prince star in a sci-fi thriller, written and directed by Wyatt Rockefeller.

[Read the whole post on screenanarchy.com...]

Colossal: An Innovative Drill-Bit Shaped Pen Holds Ink Around a Grooved Spiral

All images courtesy of Drillog

Inventive design and age-old craft converge in a simple writing instrument produced by the CNC-machining factory Shion. As its name suggests, Drillog is a drill-bit-shaped pen that holds ink in the thin grooves that spiral up the side of the nib. Whereas traditional quills require repeated dips and the more modern fountain pen suspends the pigmented liquid in an internal reservoir for longer use, a single dunk of the Drillog should retain enough ink to smoothly fill an A4-sized page.

Its interchangeable barrels come in dozens of style-and-color combinations, and the Japanese company even released a miniature palette with tiny wells designed to reduce spillage. There are just under 40 days to back the fully-funded project on Kickstarter, and you can find out more about the aluminum pen on the Drillog site and Twitter. (via Core77)


Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: Woke nation

Regular readers – people who have no real lives and are addicted to dog pictures – will recall my angst when SJWs scoffed my big flag three weeks ago. No, it has not turned up. The cops were stymied. That prized possession is gone, gone, sadly.

My crime was to celebrate Canada Day. On Canada Day. With a Canadian flag. In Canada. The theft was not enough punishment, however. The same crew took to Facebook and called me “white privileged”, “tone-deaf”, “elitist” and “racist” for having hoisted the banner. You may recall I wrote a blog about the incident, and I made note that all the people who jammed my corporate email, all those who burst into my office to berate me plus the ones leading the social media assault were female. “Young women, coincidentally,” I scribbled, because I found that interesting.

Well, now I’m “misogynist”, too.

These are disappointing times to be an old, white, male who owns stuff. Like a building on which to hang a flag. But it’s not just an issue of Canada, patriotism, Indigenous reconciliation, race or gender. It seems as a society we’re rapidly losing any sense of commonality. It’s notable that the worst public health crisis in a century – a global pandemic that killed millions globally and 26,000 in Canada in little more than a year – did not bind us. The opposite.

Social distancing rules made people leap off the sidewalk. WFH and the quest for space caused urban flight. Masking has erased smiles and normal courtesy. Society’s been upended lately and Covid seems to have exacerbated the divisions among us. Was it a coincidence that the Black Lives Matter movement exploded during the pandemic? Or that in Canada statues of monarchs and our first prime minister have been defaced, toppled and beheaded during the lockdowns?

Quarantines, restrictions, states of emergency, limits on freedom and movement – they have all ratcheted up stress. The pandemic, at the same time, has increased the wealth divide. Jeff Bezos made out like a bandit as WFH turned Amazon into an essential service. People with houses have scored an absolute windfall in the pandemic real estate boom while millions are now locked out. Generational warfare has heightened as the ‘rich’ Boomers are pilloried by the Mills and Zs. MeToo made men evil. Then Covid added race, wealth and ageism to the mix. Suddenly we got to a point where the prime minister (correctly) has a summit on Islamophobia but (shamefully) refuses to condemn burning churches. The premier of Manitoba, my old pal Brian Pallister, was trashed for saying he’d restore the public art depicting Queen Elizabeth. The Queen of Canada.

A blog comment yesterday mentioned that a group in Nova Scotia has published an “environmental racism” map of that province, purporting to show where pollution or industrialization affects people of colour or disadvantaged groups. I also noticed a real estate board has now condemned the practice of buyers sending letters to sellers in the hope they’d empathize, picking them as the purchasers. That’s potentially discriminatory, the realtors say. In fact at least one US state is banning these ‘love letters’ entirely.

Why?

Because we’re now a woke society that disallows any discrimination based on race, ancestry, color, religion, gender, family status, age, disability, sexual orientation or identity. “It’s incredibly difficult if not impossible to write a love letter and not mention at least one of those protected classes,” says an agent.

Prejudice and hatred are wrong. We all get it. Most people try to be decent. Most succeed.

However, the more we divide ourselves up – whether by racial ancestry or pronouns – the less we have in common. Like history. Or the resolve to help the environment and fix the deficit. Or save our health care from a virus. Or the nation.

Time to stop.

About the picture: “I contacted you back in 2011,” she writes, “when I lived in Nunavut.  I was only at two dogs then. I recently lost my police reject, old boy Ralph to old age (15) As well my husky, Murphy to cancer. Within a few months of each other. You gave us advice in 2011 and we followed it. But my marriage is now done. I still have three dogs, and the husband doesn’t want a divorce. He cheated with a church pastor. I’m currently vacationing at my best friends ranch with our six year old son and our three dogs. They built the dogs a dog run. I work as a carpentry apprentice. I changed careers two years ago-law enforcement to carpentry. Two of my dogs are sick, one with cancer, another with heart disease. Soon I’ll only have one dog. Left to Right: Clancy , Murphy (passed), Sally, Ralph (passed) and Hunter (the only healthy one).”

ScreenAnarchy: Friday One Sheet: BABY, DON'T CRY

This week, in anticipation of the Fantasia Film Festival coming in August, we have the key art for Jesse Dvorak and Zita Bai's troubled-teenager coming-of-age drama, with a dollop of magical realism, Baby, Don't Cry. Be it hand painted or a Photoshop plug-in (it is getting increasingly difficult to tell the difference any more), the image shows the title character, Baby, looking just past the viewer, an eponymous tear crawling down one cheek. The film aims to tell the story from the perspective of Chinese immigrants in Seattle. The handsome fox behind her hints at the fantasy elements in the film, without taking away from the emotion or humanity on display. It looks like he is there for moral support, and a bit of mystery. ...

[Read the whole post on screenanarchy.com...]

Colossal: A Visit to Wangechi Mutu’s Nairobi Studio Explores Her Profound Ties to Nature and the Feminine

Kenyan-American artist Wangechi Mutu made history in 2019 when her four bronze sculptures became the first ever to occupy the niches of the Metropolitan Museum of Art’s facade. Stretching nearly seven feet, the seated quartet evokes images of heavily adorned African queens and intervenes in the otherwise homogenous canons of art history held within the institution’s walls.

The monumental figures are one facet of Mutu’s nuanced body of work that broadly challenges colonialist, racist, and sexist ideologies. Now on view at San Francisco’s Legion of Honor is the latest iteration of the artist’s subversive projects: I Am Speaking, Are You Listening?  disperses imposing hybrid creatures in bronze and towering sculptures made of soil, branches, charcoal, cowrie shells, and other organic materials throughout the neoclassical galleries. The figurative works draw a direct connection between the Black female body and ecological devastation as they reject the long-held ideals elevated in the space.


No matter the medium, these associations reflect Mutu’s deep respect for and fascination with the ties between nature, the feminine, and African history and culture, a guiding framework that the team at Art21 explores in a recently released documentary. Wangechi Mutu: Between the Earth and the Sky visits the artist’s studio in her hometown of Nairobi and dives into the evolution of her artwork from the smaller collaged paintings that centered her early practice as a university student in New York to her current multi-media projects that have grown in both scope and scale.

Whether a watercolor painting with photographic scraps or one of her mirror-faced figures encircled with fringe, Mutu’s works are founded in an insistence on the value of all life and the ways the earth’s history functions as a source of knowledge, which she explains:

I truly believe that there’s something about taking these bits and pieces of trees, and animals and completely anonymous but extremely identifiable items and placing them somewhere that draws their energy, wherever they were coming from, whatever they did, whatever molten lava they came out of a million years ago, that is now in my work and that little piece of energy is magnified.

Dive further into Mutu’s practice by watching the full documentary above, and see a decades-long archive of her paintings, sculptures, collages, and other works on Artsy and Instagram.


ScreenAnarchy: Popcorn Frights 2021: Second Wave, Ready to Help a Nation Hungry for Horror

If real-life horrors have got you down, Popcorn Frights has you covered, nationwide. Readers in the U.S. may already know that the annual Popcorn Frights Film Festival features a tasty collection of titles to tantalize the good people of Florida -- of which I'm sure there are more than a few -- who enjoy the type of films that we showcase on this site. For their seventh edition, they are reaching out to everyone with good taste who lives in the U.S. -- of which I'm sure there are more than a few. From August 12-19, Popcorn Frights will, as usual, offer a fresh mixture of programming for those who live in Florida and wish to enjoy the theatrical experience. This year, they will showcase...

[Read the whole post on screenanarchy.com...]

ScreenAnarchy: ON THE 3RD DAY: More Images and Posters Celebrate Argentine Horror Flick's World Premiere at Fantasia!

Now that we know that Daniel de la Vega's horror flick On the 3rd Day will have its world premiere at Fantasia next month, anticipation is building. Cecilia and her son embark on a journey. On the third day, she is found wandering alone, not remembering what happened during this time. She is desperately looking for her son and finds herself wrapped up in a brutal hunt, carried out by a religious fanatic, whom she faces off against. To her, he's a lunatic. To him, Cecilia is the enemy. International sales agent Black Mandela Films celebrated this announcement yesterday with a small bounty of posters and images which we'd like to share with you below. We have also included the trailer once more. ...

[Read the whole post on screenanarchy.com...]

Arduino Blog: Meet your new table tennis coach, a tinyML-powered paddle!

Shortly after the COVID-19 pandemic began, Samuel Alexander and his housemates purchased a ping pong set and began to play — a lot. Becoming quite good at the game, Alexander realized that his style was not consistent with how more professional table tennis players hit the ball, as he simply taught himself without a coach. Because of this, he was inspired to create a smart paddle that uses an integrated IMU to intelligently classify which moves he makes and correct his form to improve it over time. 

Alexander went with the Nano 33 BLE Sense board due to its ease of use and tight integration with TensorFlow Lite Micro, not to mention the onboard 6DOF accelerometer/gyroscope module. He began by designing a small cap that fits over the bottom of a paddle’s handle and contains all the electronics and battery circuitry. With the hardware completed, it was time to get started with the software.

The Tiny Motion Trainer by Google Creative Lab was employed to quickly capture data from the Arduino over Bluetooth and store the samples for each motion. Once all of the movements had been gathered, Alexander trained the model for around 90 epochs and was able to achieve an impressive level of accuracy. His build log and demonstration video below show how this smart paddle can be used to intelligently classify and coach a novice player into using better form while playing, and it will be fun to see just how good the model can get.
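The capture side of such a project is straightforward with the board's stock libraries. A minimal sketch using the official Arduino_LSM9DS1 library for the onboard IMU (the window length is an arbitrary placeholder, and the TensorFlow Lite Micro classification step is omitted entirely) might look like this:

#include <Arduino_LSM9DS1.h>

const int WINDOW = 100;    // samples per swing window, placeholder
float window[WINDOW][6];   // ax, ay, az, gx, gy, gz

void setup() {
  Serial.begin(115200);
  if (!IMU.begin()) while (true);  // halt if the IMU won't start
}

void loop() {
  // Fill one window of combined accelerometer + gyroscope samples.
  for (int i = 0; i < WINDOW; ) {
    if (IMU.accelerationAvailable() && IMU.gyroscopeAvailable()) {
      IMU.readAcceleration(window[i][0], window[i][1], window[i][2]);
      IMU.readGyroscope(window[i][3], window[i][4], window[i][5]);
      i++;
    }
  }
  // ...hand the window to the trained TensorFlow Lite Micro model...
  Serial.println("window captured");
}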

The post Meet your new table tennis coach, a tinyML-powered paddle! appeared first on Arduino Blog.

ScreenAnarchy: LILITH Trailer: Indie Horror Anthology Out on Digital July 30th

Here is one for fans of small budget horror flicks and men getting what they deserve. Lilith is an indie horror anthology of four stories about a demon named Lilith "who punishes men for their indiscretions against women". Poignant for our times. The horror flick stars a number of icons of the genre, Vernon Wells and Felissa Rose for example, and will be available on digital platforms on July 30th from Terror Films. Check out the trailer below. Genre icons Vernon Wells (The Road Warrior, Commando), Felissa Rose (Sleepaway Camp, Return to Sleepaway Camp), Devanny Pinn (House of Manson, The Dawn), and Thomas Haley (Camp Twilight, Blind) star in the highly-anticipated new horror anthology, LILITH. LILITH, directed by Alex T. Hwang, and also...

[Read the whole post on screenanarchy.com...]

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Origins



Click here to go see the bonus panel!

Hovertext:
Relative dick moves have of course been possible since RNA world.



Colossal: A Dizzying Carpet of Crystals Blankets a Salon in the Royal Palace Amsterdam with Prismatic Patterns

Photo by Benning/Gladcova. All images © Suzan Drummen, shared with permission

The latest installation by Dutch artist Suzan Drummen (previously) masks a stately salon in the Royal Palace Amsterdam with a gleaming carpet of crystals placed in psychedelic swirls. A response to the Golden Age-era architecture, the bright colors of Drummen’s work are intended to clash with the rich, muted hues of the furniture and walls. Because each individual crystal is laid by hand and left unsecured, the labor-intensive process took a team of four a full nine days to complete.

Equally mesmerizing and disorienting, Drummen’s elaborate installations often rely on a combination of patterns, reflection, and a three-dimensional texture that creates a dizzying effect. Much of her work is informed by the overwhelming amount of information in today’s world that can spark confusion and uncertainty, which she explains:

Phenomena like these alarm me as a person, but as a maker, I’m inspired by that dizzying multiplicity. I’m interested in things that dazzle us, and in my work, I try to ramp that up. It’s an ongoing quest, with a constant interplay between seriousness, fear, playfulness, and hope. Above all I want it to be vibrant and vital.

Drummen’s piece is on view through October 3 as part of Trailblazers, a group exhibition inviting past recipients of The Royal Award for Modern Painting to show their works within the palace’s halls. Explore a larger collection of the Amsterdam-based artist’s projects on her site and Instagram.


The work in progress

Dutch King Willem Alexander and the artist. Photo by Jeroen van der Meyde

new shelton wet/dry: Every day, the same, again

Our everyday experience informs us that a human observer is capable of observing one set of physical circumstances at a time. Evidence from psychology, though, indicates that people may have the capacity to make observations of mutually exclusive physical phenomena

All cancers fall into just two categories, according to new research

Viral load is roughly 1,000 times higher in people infected with the Delta variant than those infected with the original coronavirus strain … the researchers report that virus was first detectable in people with the Delta variant four days after exposure, compared with an average of six days among people with the original strain

A longer gap between first and second doses of the Pfizer-BioNTech Covid vaccine makes the body’s immune system produce more infection-fighting antibodies, UK researchers have found. An eight-week gap seems to be the sweet spot for tackling the Delta variant.

BBC investigation based on the experiences of dozens of women reveals concerns about how OnlyFans is structured, managed and moderated

orgasm consistency through sexual intercourse had a stronger influence on orgasm satisfaction and sexual satisfaction than orgasm consistency through oral sex, stimulation by the partner’s hand, or self-stimulation

How many parents regret having children and how it is linked to their personality and health

A Wall Street Journal investigation found that TikTok only needs one important piece of information to figure out what you want

How a baby-faced CEO turned a Farmville clone into a massive Ponzi scheme

First lethal attacks by chimpanzees on gorillas observed

Vasya has 2 sisters more than he has brothers. How many daughters more than sons do Vasya’s parents have? — 77 problems

How many robots does it take to run a grocery store?

HAD TOO USE PARACHUTE LIKE BABY

Penny Arcade: News Post: Digiman

Tycho: Alright, so: let's go down the list of fascinations Gabriel has undertaken during The Plague Years:

1. Simulated Racing Of Cars, which became Real Racing Of Cars
2. Building Gunpla, which became watching something like a hundred hours of Gundam

And now, improbably, startlingly, Digimon. "I grabbed one of the new Digimon X devices just for kicks and it’s pretty cool. I never did the virtual pet thing in the 90’s but so far I’ve kept this little bastard alive for three days. pic.twitter.com/rFaXj6f8R8" — Gabe (@cwgabriel) July 23, 2021. Look at that thing. Have they…

Penny Arcade: Comic: Digiman

New Comic: Digiman

Schneier on Security: Commercial Location Data Used to Out Priest

A Catholic priest was outed through commercially available surveillance data. Vice has a good analysis:

The news starkly demonstrates not only the inherent power of location data, but how the chance to wield that power has trickled down from corporations and intelligence agencies to essentially any sort of disgruntled, unscrupulous, or dangerous individual. A growing market of data brokers that collect and sell data from countless apps has made it so that anyone with a bit of cash and effort can figure out which phone in a so-called anonymized dataset belongs to a target, and abuse that information.

There is a whole industry devoted to re-identifying anonymized data. This was something that Snowden showed that the NSA could do. Now it’s available to everyone.

Schneier on Security: Nasty Windows Printer Driver Vulnerability

From SentinelLabs, a critical vulnerability in HP printer drivers:

Researchers have released technical details on a high-severity privilege-escalation flaw in HP printer drivers (also used by Samsung and Xerox), which impacts hundreds of millions of Windows machines.

If exploited, cyberattackers could bypass security products; install programs; view, change, encrypt or delete data; or create new accounts with more extensive user rights.

The bug (CVE-2021-3438) has lurked in systems for 16 years, researchers at SentinelOne said, but was only uncovered this year. It carries an 8.8 out of 10 rating on the CVSS scale, making it high-severity.

Look for your printer here, and download the patch if there is one.

Ideas: BBC Reith Lectures: Mark Carney, Part Three

In our final episode of Mark Carney’s 2020 BBC Reith Lectures, the economist and former governor of both the Bank of England and the Bank of Canada focuses on how the ultimate test of a more fair economy will be how it addresses the growing climate crisis. This episode originally aired on February 24, 2021.

Planet Haskell: Magnus Therning: Keeping todo items in org-roam v2

Org-roam v2 has been released and yes, it broke my config a bit. Unfortunately the v1-to-v2 upgrade wizard didn't work for me. I realized later that it might have been due to the roam-related functions I'd hooked into before-save-hook. I didn't think about it until I'd already manually touched up almost all my files (there aren't that many) so I can't say anything for sure. However, I think it might be a good idea to keep hooks in mind if one runs into issues with upgrading.

The majority of the time I didn't spend on my notes though, but on the setup I've written about in an earlier post, Keeping todo items in org-roam. Due to some of the changes in v2, changes that I think make org-roam slightly more "org-y", that setup needed a bit of love.

The basis is still the same 4 functions I described in that post, only the details had to be changed.

I hope the following is useful, and as always I'm happy to receive comments and suggestions for improvements.

Some tag helpers

The very handy functions for extracting tags as lists seem to be gone; in their place I found org-roam-{get,set}-keyword. Using these I wrote three wrappers that allow slightly nicer handling of tags.

(defun roam-extra:get-filetags ()
  "Return the filetags of the current buffer as a list of strings."
  (split-string (or (org-roam-get-keyword "filetags") "")))

(defun roam-extra:add-filetag (tag)
  "Add TAG to the filetags of the current buffer."
  (let* ((new-tags (cons tag (roam-extra:get-filetags)))
         (new-tags-str (combine-and-quote-strings new-tags)))
    (org-roam-set-keyword "filetags" new-tags-str)))

(defun roam-extra:del-filetag (tag)
  "Remove TAG from the filetags of the current buffer."
  (let* ((new-tags (seq-difference (roam-extra:get-filetags) `(,tag)))
         (new-tags-str (combine-and-quote-strings new-tags)))
    (org-roam-set-keyword "filetags" new-tags-str)))

The layer

roam-extra:todo-p needed no changes at all. I'm including it here only for easy reference.

(defun roam-extra:todo-p ()
  "Return non-nil if current buffer has any TODO entry.

TODO entries marked as done are ignored, meaning this
function returns nil if current buffer contains only completed
tasks."
  (org-element-map
      (org-element-parse-buffer 'headline)
      'headline
    (lambda (h)
      (eq (org-element-property :todo-type h)
          'todo))
    nil 'first-match))

As pretty much all functions I used in the old version of roam-extra:update-todo-tag are gone, I took the opportunity to rework it completely. I think it ended up being slightly simpler. I suspect the use of org-with-point-at 1 ... is unnecessary, but I haven't tested it yet so I'm leaving it in for now.

(defun roam-extra:update-todo-tag ()
  "Update TODO tag in the current buffer."
  (when (and (not (active-minibuffer-window))
             (org-roam-file-p))
    (org-with-point-at 1
      (let* ((tags (roam-extra:get-filetags))
             (is-todo (roam-extra:todo-p)))
        (cond ((and is-todo (not (seq-contains-p tags "todo")))
               (roam-extra:add-filetag "todo"))
              ((and (not is-todo) (seq-contains-p tags "todo"))
               (roam-extra:del-filetag "todo")))))))

In the previous version roam-extra:todo-files was built using an SQL query. That felt a little brittle to me, so despite that my original inspiration contains an updated SQL query I decided to go the route of using the org-roam API instead. The function org-roam-node-list makes it easy to get all nodes and then finding the files is just a matter of using seq-filter and seq-map. Now that headings may be nodes, and that heading-based nodes seem to inherit the top-level tags, a file may appear more than once, hence the call to seq-uniq at the end.

Based on what I've seen V2 appears less eager to sync the DB, so to make sure all nodes are up-to-date it's best to start off with forcing a sync.

(defun roam-extra:todo-files ()
  "Return a list of roam files containing todo tag."
  (org-roam-db-sync)
  (let ((todo-nodes (seq-filter (lambda (n)
                                  (seq-contains-p (org-roam-node-tags n) "todo"))
                                (org-roam-node-list))))
    (seq-uniq (seq-map #'org-roam-node-file todo-nodes))))

With that in place it turns out that also roam-extra:update-todo-files worked without any changes. I'm including it here for easy reference as well.

(defun roam-extra:update-todo-files (&rest _)
  "Update the value of `org-agenda-files'."
  (setq org-agenda-files (roam-extra:todo-files)))

Hooking it up

The variable org-roam-file-setup-hook is gone, so the more general find-file-hook will have to be used instead.

(add-hook 'find-file-hook #'roam-extra:update-todo-tag)
(add-hook 'before-save-hook #'roam-extra:update-todo-tag)
(advice-add 'org-agenda :before #'roam-extra:update-todo-files)

Arduino Blog: This Arduino-powered robotic fish swims like the real thing

Biomimicry is often used to take the designs that nature has perfected over a period of millions of years and incorporate them into our own technology. One maker who goes by mcp on YouTube took this idea one step further and created a fish that can swim in the water like the actual creature. By carefully analyzing and studying the patterns a fish makes while it scurries through a lake, he was able to reduce these motions down to just a few joints. 

The body of this DIY robotic fish was constructed from a series of four joints that each contain a single mini servo motor to control their movements. Next, an Arduino Nano was selected as the microcontroller board due to its small size and ample supply of GPIO pins. In order for the fish to sense if there is an obstacle in the way and avoid it, the device also features “eyes” that utilize IR emitter/receiver pairs.

Once the spine of servo motors was combined with the Arduino and a set of LiPo batteries, mcp slipped a skin made from a waterproof latex-like material over the assembly, which aids in moving through the water. In his video below, the DIY robotic fish can be seen oscillating freely through a bathtub full of water, along with a pool. His device works very well as it generates plenty of forward force to swim wherever it wants while avoiding obstacles.
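The usual way to drive such a servo spine is a phase-shifted sine wave that travels down the body. Here is a minimal sketch of that idea, assuming four servos on pins 2 through 5 (the pins, amplitude, and frequency are invented placeholders rather than mcp's actual values):

#include <Servo.h>

const uint8_t JOINTS = 4;
Servo joint[JOINTS];
const float AMPLITUDE = 30.0;  // degrees of swing per joint, placeholder
const float FREQ = 1.5;        // tail-beat frequency in Hz, placeholder
const float PHASE_LAG = 0.6;   // radians between adjacent joints

void setup() {
  for (uint8_t i = 0; i < JOINTS; i++) joint[i].attach(2 + i);
}

void loop() {
  float t = millis() / 1000.0;
  for (uint8_t i = 0; i < JOINTS; i++) {
    // 90 degrees is each servo's neutral (straight) position; the per-joint
    // phase lag makes the wave travel tail-ward, which is what produces thrust.
    float angle = 90.0 + AMPLITUDE * sin(2 * PI * FREQ * t - PHASE_LAG * i);
    joint[i].write((int)angle);
  }
  delay(20);  // ~50 Hz update rate
}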

The post This Arduino-powered robotic fish swims like the real thing appeared first on Arduino Blog.

Open Culture: Sylvia Plath’s Tarot Cards (Which Influenced the Poems in Ariel) Were Just Sold for $207,000

We celebrated my birthday yesterday: [Ted] gave me a lovely Tarot pack of cards and a dear rhyme with it, so after the obligations of this term are over your daughter shall start her way on the road to becoming a seeress & will also learn how to do horoscopes, a very difficult art which means reviving my elementary math. 

– Sylvia Plath, in a letter to her mother, 28 October 1956

Sylvia Plath’s Tarot cards, a 24th birthday present from her husband, poet Ted Hughes, just went for £151,200 in an auction at Sotheby’s.

That’s approximately £100,000 more than this lot, a Tarot de Marseille deck printed by playing card manufacturer B.P. Grimaud de Paris, was expected to fetch.

The auction house’s description indicates that a few of the cards were discolored —  evidence of use, as supported by Plath’s numerous references to Tarot in her journals.

Recall Tarot’s appearance in “Daddy,” her most widely known poem, and her identification with the Hanging Man card, in a poem of the same name:

By the roots of my hair some god got hold of me.

I sizzled in his blue volts like a desert prophet.

The nights snapped out of sight like a lizard’s eyelid:

A world of bald white days in a shadeless socket.

A vulturous boredom pinned me in this tree.

If he were I, he would do what I did.

This century has seen her collection Ariel restored to its author’s intended order. The original order is said to correspond quite closely to Tarot, with the first twenty-two poems symbolizing the cards of the Major Arcana.

The next ten are aligned with the numbers of the Minor Arcana. Those are followed by four representing the Court cards. The collection’s final four poems can be seen to reference the pentacles, cups, swords and wands that comprise the Tarot’s suits.

Ariel’s manuscript was rearranged by Hughes, who dropped some of the “more lacerating” poems and added others in advance of its 1965 publication, two years after Plath’s death by suicide. (Hear Plath read poems from Ariel here.)

Daughter Frieda defends her father’s actions and describes how damaging they were to his reputation in her Foreword to Ariel: The Restored Edition.

One wonders whether it’s significant that Plath’s Page of Cups, a card associated with positive messages related to family and loved ones, has a rip in it.

We also wonder who paid such a staggering price for those cards.

Will they give the deck a moon bath or salt burial to cleanse it of Plath’s negative energy?

Or is the winning bidder such a diehard fan, the chance to handle something so intimately connecting them to their literary hero neutralizes any occult misgivings?

We rather wish Plath’s Tarot de Marseille had been awarded to Phillip Roberts in Shipley, England, who planned to exhibit the cards alongside her tarot-influenced poems in a pop-up gallery at the Saltaire Festival. To finance this dream, he launched a crowd-funding campaign, pledging that every £100 donor could keep one of the cards, to be drawn at random, with all contributors invited to submit new art or writing to the mini-exhibition: “Save Sylvia Plath’s cards from living in the drawers of some wealthy collector, and let’s make some art together!”

Alas, Roberts and friends fell £148,990 short of the winning bid. Better luck next time, mate. We applaud your graciousness in defeat, as well as the spirit in which your project was conceived.

via Lithub

Related Content:

The Artistic & Mystical World of Tarot: See Decks by Salvador Dalí, Aleister Crowley, H.R. Giger & More

Why Should We Read Sylvia Plath? An Animated Video Makes the Case

Hear Sylvia Plath Read 18 Poems From Her Final Collection, Ariel, in 1962 Recording

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine.  Follow her @AyunHalliday.

Sylvia Plath’s Tarot Cards (Which Influenced the Poems in Ariel) Were Just Sold for $207,000 is a post from: Open Culture. Follow us on Facebook and Twitter, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: POW! WOW! Hawaii: The First Decade Exhibition at Bishop Museum

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Photographer Spotlight: Darío Toscano

Open Culture: How The Pink Panther Painted The Mona Lisa’s Smile: Watch the 1975 Animation, “Pink Da Vinci”

Just a little fun to send you into the summer weekend. Above, we present the 1975 animated short, “Pink Da Vinci,” which IMDB frames as follows:

Another battle of the paintbrush between the Pink Panther and a diminutive painter, who this time is Leonardo Da Vinci, painting his masterpiece, the Mona Lisa. The little Da Vinci paints a pouting mouth on the Mona Lisa, but the Pink Panther decides to covertly replace the pout with a smile. When the smile wins the appreciation of an art patron, Da Vinci is enraged and repaints the pout. The Pink Panther repeatedly changes the pout to a smile while the little painter is not looking, and ultimately it is the Pink Panther’s version of the Mona Lisa that hangs in the Louvre.

If this whets your appetite, watch 15 hours of Pink Panther animations here.

Would you like to support the mission of Open Culture? Please consider making a donation to our site. It’s hard to rely 100% on ads, and your contributions will help us continue providing the best free cultural and educational materials to learners everywhere.

Also consider following Open Culture on Facebook and Twitter and sharing intelligent media with your friends. Or sign up for our daily email and get a daily dose of Open Culture in your inbox. 

Related Content 

The Original 1940s Superman Cartoon: Watch 17 Classic Episodes Free Online

Watch 15 Hours of The Pink Panther for Free

Watch La Linea, the Popular 1970s Italian Animations Drawn with a Single Line

How The Pink Panther Painted The Mona Lisa’s Smile: Watch the 1975 Animation, “Pink Da Vinci” is a post from: Open Culture. Follow us on Facebook and Twitter, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Artist Spotlight: Allister Lee

Throughout the pandemic, artist Allister Lee has been drawing portraits of people and mailing the art to them. The initial idea was just for it to be a morning warm-up, and now he’s drawn more than 100 people. Each one is wax pastel on layers of scrapbook paper, trimmed with pinking shears. It was a nice little surprise to open up my mail and find a little drawing of me (image at the bottom of this post), thanks Allister!

Explosm.net: Comic for 2021.07.23

New Cyanide and Happiness Comic

Disquiet: Colin Joyce on the Junto at Hii-mag.com

Major thanks to Colin Joyce, who wrote an article for hii-mag.com about music communities that are built around shared composition prompts. The piece is titled “Prescriptive Art Practice in Sound,” and it features two primary examples. One is Audio Playground (audioplayground.xyz), a project run monthly since last year by Sarah Geis, former artistic director of the excellent Third Coast International Audio Festival. The other is the Disquiet Junto. Joyce approaches the topic through the lens of the classic Oblique Strategies cards of Brian Eno and his late collaborator, the artist Peter Schmidt.

Here’s an excerpt:

Part of the long-running success of the project seems to come from the attention and care that Weidenbaum has applied to the prompts themselves. There are never too many in a row with the same style or bent. Some are more conceptual, like #392 “Compose the national anthem for a fictional country,” while others are more practical, like #336 “Share a piece of music you’re working on in the interest of getting feedback.” Changing up the approach no doubt helps the musicians stay engaged, but it also allows everyone involved to flex different creative muscles, to push themselves in different ways, to always be trying something new. But for Weidenbaum, what’s most important is that people are spurred to action. Whether a prompt deeply resonates with a person or not, the hope is that work gets made in response.

“I think inspiration is overrated,” he says. “I think work is what is important. You can only make music if you make music. You can only paint if you paint. You can only write if you write. In general, you won’t get better at it, or at anything else, unless you do it. And so you do it. I think being inspired really happens in the midst of work, not before the work.”

Weidenbaum’s years of shepherding the project have resulted in a robust and engaged community. The group stays in touch through a Slack channel and a message board, encouraging one another and explaining the processes behind their pieces. It’s heartwarming in a way that feels rare in the currently decentralized state of the internet. So often making music and art can be an isolated process, especially for people who work in forms that might be deemed experimental, but projects like this allow people to connect. They’re able to push themselves but also to get in touch with others who are interested in doing the same. “The single best part of it is the people,” Weidenbaum explains. “I have become aware of so many creative individuals, and had remarkable conversations with so many of them.”

Read (or listen to) the full piece at hii-mag.com.

Disquiet: Disquiet Junto Project 0499: Out of the Landscape

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Deadline: This project’s deadline is the end of the day Monday, July 26, 2021, at 11:59pm (that is, just before midnight) wherever you are. It was posted on Thursday, July 22, 2021.

These are the instructions that went out to the group’s email list (at tinyletter.com/disquiet-junto):

Disquiet Junto Project 0499: Out of the Landscape
The Assignment: Record a piece of music in which a sound emerges from a field recording.

Major thanks to Mahlen Morris and Nate Mercereau, whose work on Golden Gate Bridge music inspired this project’s approach.

This project might prove to be among the more complicated ones, or I may be mistaken. I’ve written it both as a short version and as a longer, step-by-step procedure.

This is the project in one sentence: Add a subtle sound to a preexisting field recording of a soundscape, have that sound slowly gain prominence, and then let it disappear, leaving nothing but the original field recording behind at the end.

And here is the project as a series of nine steps:

Step 1: The goal is to record a piece of music in which a sound emerges from a field recording of a soundscape. Please read these instructions through closely before proceeding with the project.

Step 2: Locate a field recording of an environment. It could be urban, rural, industrial, domestic, whatever you might choose. A recording with slight variations over time would be beneficial but isn’t necessary. You should, again, read through the instructions in full before determining what field recording you want to work with. You might use a preexisting one, or record a new one.

Step 3: Select a roughly five-minute, continuous segment of the field recording from Step 2. Set it to fade in at the start and out at the end for about 5 seconds each, so it neither starts nor ends abruptly.

Step 4: Listen closely to the field recording. Play it on repeat a few times and think about its tonality, its component parts.

Step 5: The goal for this project is to now introduce a sound at the very start of the field recording that is imperceptible as an addition. It should fit in so well that the field recording still sounds like a field recording. Plan for this addition to play for roughly 15 seconds before doing anything further with that sound.

Step 6: Now, around the 15-second mark, have that additional sound very slowly make itself more apparent. By 30 seconds, it should have risen in prominence, standing out and somewhat apart from the original field recording.

Step 7: For almost the entire remainder of the piece, have that additional sound do more. Have it morph and vary, and continue to stand out and apart from the field recording, though make sure the field recording is still audible.

Step 8: Around 45 seconds before the end of the piece, have the additional sound slowly return to its original state, as it was at the opening, when it was indistinguishable from the field recording. By the time the piece is about 30 seconds from the end, it should sound as it did when the piece started.

Step 9: When the piece is 25 or so seconds from the end, suddenly mute the additional sound. It should disappear entirely, so that for those final 25 seconds (well, 20, and then the piece will fade out for the final 5 seconds), we hear the unadorned original field recording for the first time.
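For anyone who thinks in code, here is a minimal sketch of mine (not part of the official instructions) of the gain envelope those nine steps trace for the added sound, written in Haskell; the blend level and the linear ramps are assumptions:

-- Hypothetical gain envelope for the "additional sound", following the
-- timings in the nine steps above, e.g. `envelope 300 t` for a five-minute piece.
envelope :: Double -> Double -> Double
envelope total t
  | t < 15         = blend                                   -- Step 5: blended in, imperceptible
  | t < 30         = lerp blend 1 ((t - 15) / 15)            -- Step 6: rises to prominence
  | t < total - 45 = 1                                       -- Step 7: stands out and apart
  | t < total - 30 = lerp 1 blend ((t - (total - 45)) / 15)  -- Step 8: recedes again
  | t < total - 25 = blend                                   -- back to its opening state
  | otherwise      = 0                                       -- Step 9: muted for the ending
  where
    blend = 0.2                    -- assumed level at which the sound blends in
    lerp a b f = a + (b - a) * f   -- linear interpolation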

Seven More Important Steps When Your Track Is Done:

Step 1: Include “disquiet0499” (no spaces or quotation marks) in the name of your tracks.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0499” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential to subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your tracks. It is helpful but not essential that you use SoundCloud to host your tracks.

Step 4: Post your track in the following discussion thread at llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0499-out-of-a-landscape/

Step 5: Annotate your track with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #DisquietJunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Note: Please only post one track per project. If you choose to post more than one, and do so on SoundCloud, please let me know which you’d like added to the playlist. Thanks.

Additional Details:

Deadline: This project’s deadline is the end of the day Monday, July 26, 2021, at 11:59pm (that is, just before midnight) wherever you are. It was posted on Thursday, July 22, 2021.

Length: The length of your finished track is up to you. Around five minutes is recommended.

Title/Tag: When posting your tracks, please include “disquiet0499” in the title of the tracks, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: It is always best to set your track as downloadable and to allow for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include the following information:

More on this 499th weekly Disquiet Junto project — Out of the Landscape (The Assignment: Record a piece of music in which a sound emerges from a field recording) — at: https://disquiet.com/0499/

Major thanks to Mahlen Morris and Nate Mercereau, whose work on Golden Gate Bridge music inspired this project’s approach.

More on the Disquiet Junto at: https://disquiet.com/junto/

Subscribe to project announcements here: https://tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co: https://llllllll.co/t/disquiet-junto-project-0499-out-of-a-landscape/

There’s also a Disquiet Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.

Colossal: Take a Swing Around ‘Par Excellence Redux,’ a Mini Golf Course of Playable Artworks at Elmhurst Art Museum

All images courtesy of the Elmhurst Art Museum, shared with permission

Now open at the Elmhurst Art Museum is Par Excellence Redux, a miniature golf course featuring a widely varied collection of playable artworks. Curated by Colossal’s founder and editor-in-chief Christopher Jobson as part of an open call, the two-part course pays homage to the wildly popular Par Excellence that opened in 1988 at the School of the Art Institute of Chicago. The designs range from a challenging optical illusion to a maze-like castle with the potential for a hole-in-one to Annalee Koehn’s fortune-telling piece first shown 33 years ago in the initial exhibition.

Chicago sculptor Michael O’Brien conceived of the original Par Excellence, which opened to lines down the block and subsequently sold out daily. It was recognized nationally in The New York Times, Wall Street Journal, and Chicago Tribune, among others, and went on tour throughout Illinois before returning to Chicago as a rebranded commercial project called ArtGolf, which was located at 1800 N. Clybourn in Lincoln Park on the site that’s now occupied by Goose Island Brewery. Although artist-designed golf courses have been shown at a variety of Midwest museums (you can see versions at the Walker Art Center, The Sheldon, and the Nelson-Atkins Museum of Art), Par Excellence is widely regarded as the first.

The Front 9, which runs through September 26, features artists Julie Cowan, Current Projects, Andrea Jablonski & Stolatis Inc., Annalee Koehn, Latent Design, Jesse Meredith, Gautam Rao, Robin Schwartzman & Tom Loftus (aka A Couple of Putts), and the museum’s Teen Art Council. Open October 13, 2021, to January 2, 2022, The Back 9 shows work from artists Wesley Baker, KT Duffy, Eve Fineman, Joshua Kirsch, Annalee Koehn, Vincent Lotesto, Joshua Lowe, Jim Merz, David Quednau, and Liam Wilson & Anna Gershon.

Try your hand at the first nine holes by heading over to Elmhurst’s site to book a tee time, and remember that Colossal Members get 25% off.

 

Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: The carry trade

Today this pathetic blog brings you another episode of its wildly incorrect “First World Financial Problems.”

Forget about climate change, massive flooding, fires, BLM, BIPOCs, reconciliation, Gen Z angst, cops vs encampments, ODs, the deficit, LGBTQ+ and that damn virus Delta thingy. Life needs more Greta Van Fleet and, yes, questions like this one from Audrey…

As a devout reader of your blog–thank you!–I have a question.

My husband and I recently retired and so have our friends. We’re both selling houses in Toronto and moving to BC. Due to hard work and smart investing, we each have enough money to buy a decent house with enough money left over to live comfortably.

Our friends (one of them a Bay Street finance guy) claim it’s best to buy their house for cash, which is what they’re doing.

My husband and I could also do that, but we instead opted to take out the largest mortgage we could (just before quitting work!) to buy our house, assuming that the difference between a cheap mortgage these days and the expected return on leaving more money invested makes that the better way to go.

Who’s right?

Well, that’s easy. The Bay Street dude is foolish, emotionally needy or has w-a-y more money than he requires. These days you can still load up on five-year mortgage money at less than 2%. Meanwhile a balanced & diversified, Non-Cowboy® portfolio delivered 15% in 2019, over 7.5% in 2020 and is in double-digits so far this year. Moreover, the reopening trade has likely just begun.

Periods of economic contraction and recession are historically short (18 months or less) while those of expansion and growth are long (5-7 years). In 2021 we’re just coming out of the pandemic misery with the vax only now starting to flow around the world. Mr. Market (and this blog) saw it coming. We knew pandemics are always temporary, and financial assets have been delivering fat returns as we shunt into the Roaring Twenties.

Conversely, the monetary/CB stimulus and rivers of cash the pandemic brought are ending. Already the Bank of Canada has stopped buying mortgage-backed securities, unwound repo positions and slashed its weekly bond-buying by 60%. It was these things which helped goose residential real estate prices – along with the urban flight and WFH which will also be unwinding.

Here’s some evidence. Look what happened to real estate prices in Vancouver, for example, when Covid hit and the CB crashed rates and started throwing money around…

Source: Teranet-National Bank; Wolfstreet.com; GreaterFool.ca

Conclusion: financial markets will outperform housing markets. So why would you use your own funds that can earn 6% or 8% or 12% when TNL@TB will give you bags of money at two per cent? Why not lock in a cheap home loan for five years while your portfolio grows by forty or fifty per cent during that time, then use the gains there to pay down the mortgage upon renewal?
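As a rough, hypothetical illustration of that arithmetic (assuming a 7% average annual portfolio return against a 2% five-year fixed mortgage, and ignoring taxes, fees and volatility):

$$ (1.07)^5 - 1 \approx 40\%\ \text{portfolio growth} \qquad \text{vs} \qquad (1.02)^5 - 1 \approx 10\%\ \text{cumulative interest cost} $$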

Or, of course, one could create a tax-deductible mortgage. Simple. Banker boy could buy the digs with cash, arrange a HELOC for 65% of its value at prime plus a half (3%) and invest the funds in the financial portfolio. That would yield full deductibility on the line of credit cost, assuming that interest-only payments were made. (Ignore those who tell you investing in a balanced and diversified non-registered portfolio of various asset types does not qualify for claiming interest expense. In the real world, they’re wrong.)

The point of this is evident, Audrey, and you get it. Thanks to the slimy little pathogen, borrowed money is still incredibly cheap. Thanks to Pfizer and Moderna, we’re entering into a period of growth, recovery and expansion. It’s called the carry trade. That’s when you access funds at a low rate and invest them for a higher return, or use cheap money to gain a desired asset while deploying capital to create more wealth.

Besides, the party’s over for housing. In 2022 the Bank of Canada will begin raising rates slowly as its bond-buying activities come to a complete halt. The economy will normalize, workplaces will repopulate and investor interest will shift back to urban properties. What happened between March of 2020 and, oh, last month, will not be repeated in this generation. Thus, buying real estate with cash – unless you have bushels of it – is an emotional choice. Not a practical one.

Are you sure the guy was a Bay Streeter? Maybe he was in marketing, or compliance. They’re just weird.

About the picture: “Say hi to my coast to coast dogs,” write Ian and Abril, “Lennon (7-year-old Sheltie from PEI) and Abbey (1-year-old Mini Aussie from Bellingham, WA)! They love life in Vancouver and are perfectly content in their rented condo across from Queen Elizabeth Park in Vancouver. Thanks for all your teachings over the years! Made the switch from retail advisor to managing our own B&D portfolio in 2013 and are grateful we did. Lennon’s actually making the face he makes when I say ‘Investors Group’ in this pic.  Or wait, maybe Abbey is…”

Colossal: Precise Compositions by Daniel Rueda and Anna Devís Turn Architecture into Playful Portraits

All images © Daniel Rueda and Anna Devís, shared with permission

Valencia-based duo Anna Devís and Daniel Rueda (previously) add a playful twist to mundane settings and architectural backdrops. Whether flaring a skirt into a wide, cheesy grin, posing to prop up a facade’s stripes, or gripping the tail of a balloon that looks like a tethered sun, their minimal compositions turn geometric elements and open spaces into theatrical sets ripe with humor and joy.

Devís tells Colossal that each narrative-driven image is the result of extensive planning that begins with an initial sketch, involves pairing a concept and location, and later constructing the props. They don’t use any photo-editing software, meaning that every shot is precisely composed on-site with natural lighting, a process she explains:

We carefully set the stage in real life using all sorts of everyday objects, colorful papers, matching outfits, and tons of natural light. At first glance, one would probably think that most of our images are not very difficult to capture because of their modest appearance. But, with the passing years, we’ve learned that achieving this level of simplicity is really, really complicated.

In the coming months, the duo plans to travel to various locales for photoshoots— “there are a lot of beautiful spaces where we’d love to tell a story, but we haven’t figured it out yet,” Devís says—and are in the process of working on a forthcoming book and a few exhibitions. You can find an extensive archive on both Devís’s and Rueda’s Instagrams, and buy prints on their site.

 

Ideas: BBC Reith Lectures: Mark Carney, Part Two

2020 BBC Reith Lecturer Mark Carney continues his lecture series entitled, ‘How We Get What We Value.’ In this episode, the former central bank governor focuses on the 2008 financial crisis and the ongoing impact of the pandemic. This episode originally aired on February 23, 2021.

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Wow!



Click here to go see the bonus panel!

Hovertext:
This comic is only pessimistic if you are a human.


Today's News:

TOPLAP: International Conference on Live Coding 2021, Valdivia, Chile

ICLC ZAL 2021 – Evoluciones Convergentes / Convergent Evolutions. International Conference on Live Coding 2021, Valdivia, Chile, December 15-17, 2021. More info at: https://iclc.toplap.org/2021/. (ZAL is the IATA code for Valdivia.) The conference will be hosted online; details on the virtual presentation format are coming soon. 2021 Call […]

Open Culture: The Epic of Gilgamesh, the Oldest-Known Work of Literature in World History

You’re probably familiar with The Epic of Gilgamesh, the story of an overbearing Sumerian king and demi-god who meets his match in wild man Enkidu. Gilgamesh is humbled, the two become best friends, kill the forest guardian Humbaba, and face down spurned goddess Ishtar’s Bull of Heaven. When Enkidu dies, Gilgamesh goes looking for the only man to live forever, a survivor of a legendary pre-Biblical flood. The great king then tries, and fails, to gain eternal life himself. The story is packed with episodes of sex and violence, like the modern-day comics that are modeled on ancient mythology. It is also, as you may know, the oldest-known work of literature on Earth, written in cuneiform, the oldest-known form of writing.

This is one version of the story. But Gilgamesh breaks out of the tidy frame usually put around it. It is a “poem that exists in a pile of broken pieces,” Joan Acocella writes at The New Yorker, “in an extremely dead language.”

If Gilgamesh were based on a real king of Uruk, he would have lived around 2700 BC. The first stories written about him come from some 800 years after that time, during the Old Babylonian period, after the last of the Sumerian dynasties had already ended. The version we tend to read in world literature and mythology courses comes from several hundred years later, notes the Metropolitan Museum of Art’s Ira Spar:

Some time in the twelfth century B.C., Sin-leqi-unninni, a Babylonian scholar, recorded what was to become a classic version of the Gilgamesh tale. Not content to merely copy an old version of the tale, this scholar most likely assembled various versions of the story from both oral and written sources and updated them in light of the literary concerns of his day, which included questions about human mortality and the nature of wisdom…. Sin-leqi-unninni recast Enkidu as Gilgamesh’s companion and brought to the fore concerns about unbridled heroism, the responsibilities of good governance, and the purpose of life. 

This so-called “Standard Babylonian Version,” as you’ll learn in the TED-Ed video at the top by Soraya Field Fiorio, was itself only discovered in 1849 — very recent by comparison with other ancient texts we regularly read and study. The first archaeologists to discover it were searching not for Sumerian literature but for evidence that proved the Biblical stories. They thought they’d found it in Nineveh, in the excavated library of King Ashurbanipal, the oldest library in the world. Instead, they discovered the broken, incomplete tablets containing the story of Gilgamesh and Utnapishtim, who, like Noah from the Hebrew Bible, built an enormous boat in advance of a divinely ordered flood. The first person to translate the passages was so excited, he stripped off his clothes.

The flood story wasn’t the knock-down proof Christian scholars hoped for, but the discovery of the Gilgamesh epic was even more important for our understanding of the ancient world. What we know of the story, however, was already edited and redacted to suit a millennia-old agenda. The Epic of Gilgamesh “explains that Gilgamesh, although he is king of Uruk, acts as an arrogant, impulsive, and irresponsible ruler,” Spar writes. “Only after a frustrating and vain attempt to find eternal life does he emerge from immaturity to realize that one’s achievements, rather than immortality, serve as an enduring legacy.”

Other, much older versions of his story show the mythical king and his exploits in a different light. So how should we read Gilgamesh in the 21st century, a few thousand years after his first stories were composed? You can begin here with the TED-Ed summary and Crash Course in World Mythology video further up. Dig much deeper with the lecture above from Andrew George, Professor of Babylonian at the University of London’s School of Oriental and African Studies (SOAS).

George has produced one of the most highly respected translations of Gilgamesh, Acocella writes, one that “gives what remains of Sin-leqi-unninni’s text” and appends other fragmentary tablets discovered in Baghdad, showing how the meaning of the cuneiform symbols changed over the course of the millennia between the Old Babylonian stories and the “Standard Babylonian Version” of the Epic of Gilgamesh we think we know. Hear a full reading of Gilgamesh above, as translated by N.K. Sandars.

Related Content: 

Hear The Epic of Gilgamesh Read in its Original Ancient Language, Akkadian

20 New Lines from The Epic of Gilgamesh Discovered in Iraq, Adding New Details to the Story

World Literature in 13 Parts: From Gilgamesh to García Márquez

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

The Epic of Gilgamesh, the Oldest-Known Work of Literature in World History is a post from: Open Culture. Follow us on Facebook and Twitter, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

The Universe of Discourse: The convergents of 2x

Take some real number $x$ and let its convergents be $c_0, c_1, c_2, \ldots$. Now consider the convergents of $2x$. Sometimes they will include $2c_0, 2c_1, 2c_2, \ldots$, sometimes only some of these.

For example, the convergents of $\pi$ and $2\pi$ are

$$ \begin{array}{rlc} \pi & \approx & \color{darkblue}{3},&&& \color{darkblue}{\frac{22}{7}}, & \color{darkblue}{\frac{333}{106}}, && \color{darkblue}{\frac{355}{113}}, & \color{darkblue}{\frac{103993}{33102}}, && \frac{104348}{33215}, & \color{darkblue}{\frac{208341}{66317}}, & \ldots \\ 2\pi & \approx & \color{darkblue}{6}, & \frac{19}{3}, & \frac{25}{4}, & \color{darkblue}{\frac{44}{7}}, & \color{darkblue}{\frac{333}{53}}, & \frac{377}{60}, & \color{darkblue}{\frac{710}{113}}, & \color{darkblue}{\frac{103993}{16551}}, & \frac{312689}{49766}, && \color{darkblue}{\frac{416682}{66317}}, & \ldots \end{array}
$$

Here are the analogous lists for $\frac12(1+\sqrt{5})$ and $1+\sqrt{5}$:

$$ \begin{array}{rlc} \frac12(1+\sqrt{5}) & \approx & 1, & 2, & \color{darkblue}{\frac32}, & \frac53, & \frac85, & \color{darkblue}{\frac{13}8}, & \frac{21}{13}, & \frac{34}{21}, & \color{darkblue}{\frac{55}{34}}, & \frac{89}{55}, & \frac{144}{89}, & \color{darkblue}{\frac{233}{144}}, & \frac{377}{233}, & \frac{610}{377}, & \color{darkblue}{\frac{987}{610}}, & \ldots \\ 1+\sqrt{5} & \approx & & & \color{darkblue}{3}, &&& \color{darkblue}{\frac{13}4}, &&& \color{darkblue}{\frac{55}{17}}, &&& \color{darkblue}{\frac{233}{72}}, &&& \color{darkblue}{\frac{987}{305}}, & \ldots \end{array} $$

This time all the convergents in the second list are matched by convergents in the first list. The number $\frac12(1+\sqrt{5})$ is notorious because it's the real number whose convergents converge the most slowly. I'm surprised that $1+\sqrt{5}$ converges so much more quickly; I would not have expected the factor of 2 to change the situation so drastically.

I haven't thought about this at all yet, but it seems to me that a promising avenue would be to look at what Gosper's algorithm would do for the case and see what simplifications can be done. This would probably produce some insight, and maybe a method for constructing a number so that all the convergents of are twice those of .

Tea Masters: Hung Shui SiJiChun feedback


This comment from Kelly on FB made my day: 
"I just had the 2020 Hung Shui 'Dong Pian' Oolong from Mingjian sample you sent me and wow! It was gorgeous, the flavour, the relaxed buzz, the smell and experience. My mouth has been watering all day, the taste lingers in the best possible way. I never knew how much I loved Oolongs, thank you for introducing me to a good brew. The stuff I had from other shops was so inferior, and honestly turned me off of Oolongs. So seriously thank you. I hope you're enjoying your summer. Your tea is defiantly helping me enjoy mine."

I'm glad that you've enjoyed this finely roasted Oolong. It's the cheapest in my selection, but it is 20% more expensive than the fresh Si Ji Chun Dong Pian Oolong, its unroasted version. That's because Hong Pei, roasting, is an art that requires skill and time, and therefore has a price. Those who compare the fresh Dong Pian and its roasted version will discover the transformative power of roasting. Originally, this roasting process helped preserve the flavours of the leaves during the long transportation of tea in the Qing dynasty (1644-1911). It was also a way to refine the taste and add depth.

Note 1: these 2 Oolongs (fresh and roasted SiJiChun) are also included in the TeaMaster's sampler for beginners.

Note 2: The online boutique remains open this summer. I might just take a week or so off in August.

Open Culture: Discover The Grammar of Ornament, One of the Great Color Books & Design Masterpieces of the 19th Century

In the mid-17th century, young Englishmen of means began to mark their coming of age with a “Grand Tour” across the Continent and even beyond. This allowed them to take in the elements of their civilizational heritage first-hand, especially the artifacts of classical antiquity and the Renaissance. After completing his architectural studies, a Londoner named Owen Jones embarked upon his own Grand Tour in 1832, rather late in the history of the tradition, but ideal timing for the research that inspired the project that would become his legacy.

According to the Victoria and Albert Museum, Jones visited “Italy, Greece, Egypt and Turkey before arriving in Granada, in Spain to carry out studies of the Alhambra Palace that were to cement his reputation.”

He and French architect Jules Goury, “the first to study the Alhambra as a masterpiece of Islamic design,” produced “hundreds of drawings and plaster casts” of the historical, cultural, and aesthetic palimpsest of a building complex. The fruit of their labors was the book Plans, Elevations, Sections and Details of the Alhambra, “one of the most influential publications on Islamic architecture of all time.”

Published in the 1840s, the book pushed the printing technologies of the day to their limits. In search of a way to do justice to “the intricate and brightly colored decoration of the Alhambra Palace,” Jones had to put in more work researching “the then new technique of chromolithography — a method of producing multi-color prints using chemicals.” In the following decade, he would make even more ambitious use of chromolithography — and draw from a much wider swath of world culture — to create his printed magnum opus, The Grammar of Ornament.

With this book, Jones “set out to reacquaint his colleagues with the underlying principles that made art beautiful,” write Metropolitan Museum of Art curator Femke Speelberg and librarian Robyn Fleming. “Instead of writing an academic treatise on the subject, he chose to assemble a book of one hundred plates illustrating objects and patterns from around the world and across time, from which these principles could be distilled.” To accomplish this he drew on his own travel experiences as well as resources closer at hand, including “the museological and private collections that were available to him in England, and the objects that had been on display during the Universal Exhibitions held in London in 1851 and 1855.”

The Grammar of Ornament was published in 1856, emerging into a Britain “dominated by historical revivals such as Neoclassicism and the Gothic Revival,” says the V&A. “These design movements were riddled with religious and social connotations. Instead, Owen Jones sought a modern style with none of this cultural baggage. Setting out to identify the common principles behind the best examples of historical ornament, he formulated a design language that was suitable for the modern world, one which could be applied equally to wallpapers, textiles, furniture, metalwork and interiors.”

Indeed, the patterns so lavishly reproduced in the book soon became trends in real-world design. They weren’t always employed with the intellectual understanding Jones sought to instill, but since The Grammar of Ornament has never gone out of print (and can even be downloaded free from the Internet Archive), his principles remain available for all to learn — and his painstakingly artistic printing work remains available for all to admire — even in the corners of the world that lay beyond his imagination.

You can purchase a complete and unabridged color edition of The Grammar of Ornament online.

Related Content:

The Complex Geometry of Islamic Art & Design: A Short Introduction

Explore the Beautiful Pages of the 1902 Japanese Design Magazine Shin-Bijutsukai: European Modernism Meets Traditional Japanese Design

A Beautiful 1897 Illustrated Book Shows How Flowers Become Art Nouveau Designs

The Bauhaus Bookshelf: Download Original Bauhaus Books, Journals, Manifestos & Ads That Still Inspire Designers Worldwide

Every Page of Depero Futurista, the 1927 Futurist Masterpiece of Graphic Design & Bookmaking, Is Now Online

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Discover The Grammar of Ornament, One of the Great Color Books & Design Masterpieces of the 19th Century is a post from: Open Culture. Follow us on Facebook and Twitter, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Photographer Spotlight: Li Hui

Open Culture: When David Bowie Played Andy Warhol in Julian Schnabel’s Film, Basquiat

Many actors have played Andy Warhol over the years, but not as many as you might think. Crispin Glover played him in The Doors, Jared Harris played him in I Shot Andy Warhol, Guy Pearce played him in Factory Girl, and Bill Hader played him in Men in Black III, but with a twist: he is actually an agent who is so bad at his cover role as an artist that he’s “painting soup cans and bananas, for Christ sakes!” On television, John Cameron Mitchell has played the Warhol role in Vinyl, and Evan Peters briefly portrayed him in American Horror Story: Cult.

But as you might suspect, our favorite Warhol is the one played by David Bowie in Julian Schnabel’s 1996 Basquiat, the biopic of the Black street artist who was taken into the art-world fold by Warhol and wound up collaborating with him on some of the last works either artist made. Jeffrey Wright plays Basquiat in one of his earliest roles.

Now, you might watch this scene from Basquiat above (and another below) and say, well, that’s just mostly Bowie. But I would say, yes, that’s kind of the point. Andy Warhol is an enigmatic figure, a legend to many, a man who hid behind a constructed persona; David Bowie is too. When one plays the other, a weird sort of magic happens. Fame leaks into fame. Many actors might do better with the mannerisms or the voice, but the charisma…that is all Bowie. After singing about the painter back in 1972, Bowie finally collapsed their visions together in the art of film, where reality and fantasy meet and meld.

Around this time in the mid 1990s, Bowie was very much a part of the New York/London art scene. He was on the editorial board of Modern Painters magazine and interviewed Basquiat director (and artist) Julian Schnabel, Tracey Emin, Damien Hirst, and Balthus. A conceptual artist-slash-serial killer became one of the main characters of his overlooked 1995 Eno collaboration Outside. He was both a collector and an artist, which we’ve focused on before. And he was thinking about the new world opening up because of the internet. Bowie’s artist brain saw the possibilities and the dangers, and also the raw capitalist potential. He offered shares in himself as an IPO in 1997 and released a single as Tao Jones Index, three puns in one. Bowie never predicted the idiocy of the NFT, but he certainly would have laughed wryly at it.

In this Charlie Rose interview to promote Basquiat, Bowie and Schnabel discuss the role of Warhol, the role of art, and the reality of the art world.

“It was more of an impersonation, really,” says Bowie about his Warhol. “That’s how I approach anything.” Of note, however, is how quickly Bowie moves away from discussing himself or the film to talk about larger issues of art and commerce. Bowie does admit that he and Schnabel disagree on a lot of things, and you can see it in their body language. But there’s also a huge respect. It’s a fascinating interview, go watch the whole thing.

Bonus: Below watch Bowie meeting Warhol back during the day…

Related Content:

96 Drawings of David Bowie by the “World’s Best Comic Artists”: Michel Gondry, Kate Beaton & More

The Odd Couple: Jean-Michel Basquiat and Andy Warhol, 1986

When David Bowie Launched His Own Internet Service Provider: The Rise and Fall of BowieNet (1998)

Take a Close Look at Basquiat’s Revolutionary Art in a New 500-Page, 14-Pound, Large Format Book by Taschen

Ted Mills is a freelance writer on the arts who currently hosts the Notes from the Shed podcast and is the producer of KCRW’s Curious Coast. You can also follow him on Twitter at @tedmills, and/or watch his films here.

When David Bowie Played Andy Warhol in Julian Schnabel’s Film, Basquiat is a post from: Open Culture. Follow us on Facebook and Twitter, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.

Explosm.net: Comic for 2021.07.22

New Cyanide and Happiness Comic

Disquiet: Orchestra Room

Exterior entrance

Planet Haskell: Well-Typed.Com: Towards system profiler support for GHC

Recently we have been working with Hasura towards making call stack profiling possible using perf and similar tools. For that purpose Andreas Klebinger implemented a prototype branch of GHC which brings us closer to such support.

In this blog post we discuss how we could adapt GHC’s register assignment to make perf usable on Haskell code, what benefits we could gain from this functionality, as well as the trade-offs associated with the change.

We also request feedback from the Haskell community on these trade-offs. Please let us know if you think this would be a good direction for GHC!

The status quo

The best bet for profiling Haskell programs currently is GHC’s profiling mode.

When it works, this mode works really well. However, it has some pretty big caveats:

  • It requires recompilation of the executable and all its libraries.
  • It comes with some overhead, as it changes the representation of heap objects, resulting in a significant cost in both residency and runtime when profiling.
  • Inserting cost centres into your program can interfere with optimizations. This means things like list fusion or inlining behaviour can differ vastly between a profiled and release build.

For some specific use cases one can work around the last point by using a profiling mode called ticky-ticky. Using ticky-ticky can be tricky. It’s not considered a stable feature of GHC and as such can change between versions. While it doesn’t interfere with core optimizations, it comes at a steep runtime cost and, being intended as a tool for language implementors, isn’t very user friendly. This rarely makes ticky-ticky the best option for regular users.

While the above profiling modes can be very helpful they do not cover all situations. What is missing is a low overhead way to characterize performance, especially runtime. Ideally something that works both on Haskell and other languages.

Statistical profilers

System profilers like perf on Linux facilitate such low overhead performance profiling/monitoring. Similar tools exist for all platforms, but for this blog post we will focus on perf as most of these points also apply to other platforms and programs.

perf not only allows users to capture performance data about their applications, it’s also for the most part language agnostic, allowing users to profile applications built with multiple languages. Moreover, it allows us to do all that without recompilation, or even without restarting applications, as it can be attached and detached from processes at any point.

It does this by using a kernel interface to take snapshots of an application’s state at a certain rate. These snapshots include things like the current instruction, the values currently in registers and the C call stack.

By taking a large number of samples, we can infer which functions we spend most time in. This already works on compiled Haskell code even today. However, while knowing that most time is spent in, e.g., Data.Map.insert can sometimes be helpful, it’s not always enough. For complex applications, what we really want to know is why we are inserting into a map so much, or why we are spending so much time decoding JSON, for example.

While perf can’t answer the “why” question directly, if we can get access to the call stack we can infer much of the control flow, and with domain knowledge of an application this often helps answer the “why” question. This makes call stack information incredibly valuable. This information can also be expressed in powerful visualizations like flame graphs which can make it even more obvious what the performance bottlenecks are in a specific workload.

The bad news is that getting the call stack for Haskell code via perf currently doesn’t work at all.

Why perf doesn’t work for Haskell

perf uses a kernel interface (namely perf_events) to capture machine state (registers, C stack) and make the captured data available to tools (e.g. via perf record). These tools can then analyze the data for monitoring or profiling purposes.

However the interface to capture the machine state makes assumptions which simply don’t apply to Haskell code. The details could fill more than one blog post on their own but in rough terms:

  • Haskell’s execution model currently relies on keeping a (Haskell) stack, separate from the C stack.
  • When compiling to machine code GHC uses both the C stack, and the Haskell stack.
    • The C stack is used for foreign calls, the runtime system, and spilling of temporaries during the execution of a Haskell function.
    • The Haskell stack is used for control flow between pieces of Haskell code (e.g. one Haskell function calling another).
  • Since both stacks are referenced often from machine code, references to both stacks are kept in registers, currently (on x86-64) $rsp for the C stack and $rbp for the Haskell stack.
  • The perf interface allows for capturing only one stack, and it must be pointed to by the $rsp register.

While we could in theory change the kernel interface, it seems unlikely that such a change would be accepted, and it would take even longer for it to filter down to users of Haskell.

This means that, as things stand, we can’t capture the Haskell stack using the perf kernel interface unless we change some things about how GHC works.

Ben Gamari wrote about a few approaches on how one might get this to work in his DWARF blog post series. Currently we are looking into the “Using native stack pointer register” approach.

Capturing the Haskell stack

What do we need to do in order to make this work?

If we want to reuse tools based on perf we have to capture call stacks, and do it in a way which perf understands.

  • We could introduce an alternative method to capture the Haskell stack. For example, the Haskell runtime could periodically capture and record the current execution stacks (#10915).
  • We could change the register mapping such that $rsp points to the Haskell stack (#8272).

The advantage of the latter is that it should be easier to integrate into already existing tooling. E.g. if perf record can properly record the Haskell stack one would expect perf report and similar to just work.

Working with Hasura we have implemented a branch of GHC for which $rsp points at the Haskell stack during execution, in order to investigate the feasibility of this approach.

While we can run Haskell programs compiled this way, there seem to be some issues with perf not fully understanding the captured stacks. We can see that perf has captured the Haskell stacks, but unlike gdb it doesn’t seem to recognize them as call stacks at this point. Before we spend time fixing this we would like feedback from the community to determine whether this change is worthwhile.

How does a call stack look?

Here is a call stack produced by first compiling GHC itself with our branch, then running it inside gdb and interrupting it mid-compilation:

#0  0x00007ffff4a57a6e in ghc_GHCziCoreziOptziSimplifyziEnv_zdwsimplBinder_info () at compiler/GHC/Core/Opt/Simplify/Env.hs:772
#1  0x00007ffff4a2d258 in rpjH_info () at compiler/GHC/Core/Opt/Simplify.hs:1575
#2  0x00007ffff4a2df70 in rpjI_info () at compiler/GHC/Core/Opt/Simplify.hs:1524
#3  0x00007ffff4a432b8 in rpjN_info () at compiler/GHC/Core/Opt/Simplify.hs:1002
#4  0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#5  0x00007ffff4a4c498 in rpjP_info () at compiler/GHC/Core/Opt/Simplify.hs:998
#6  0x00007ffff4a215a8 in sCVS_info () at compiler/GHC/Core/Opt/Simplify.hs:2944
#7  0x00007ffff4a22e78 in rpju_info () at compiler/GHC/Core/Opt/Simplify.hs:2944
#8  0x00007ffff4a24ac0 in rpjx_info () at compiler/GHC/Core/Opt/Simplify.hs:2799
#9  0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#10 0x00007ffff4a4c498 in rpjP_info () at compiler/GHC/Core/Opt/Simplify.hs:998
#11 0x00007ffff4a215a8 in sCVS_info () at compiler/GHC/Core/Opt/Simplify.hs:2944
#12 0x00007ffff4a22e78 in rpju_info () at compiler/GHC/Core/Opt/Simplify.hs:2944
#13 0x00007ffff4a24ac0 in rpjx_info () at compiler/GHC/Core/Opt/Simplify.hs:2799
#14 0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#15 0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#16 0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#17 0x00007ffff4a2c510 in rpjF_info () at compiler/GHC/Core/Opt/Simplify.hs:1627
#18 0x00007ffff4a25588 in ghc_GHCziCoreziOptziSimplify_simplExpr1_info () at compiler/GHC/Core/Opt/Simplify.hs:1179
#19 0x00007ffff4a2e008 in rpjI_info () at compiler/GHC/Core/Opt/Simplify.hs:1547
#20 0x00007ffff4a28490 in rpjE_info () at compiler/GHC/Core/Opt/Simplify.hs:354
#21 0x00007ffff4a4fa70 in rpjR_info () at compiler/GHC/Core/Opt/Simplify.hs:235
#22 0x00007ffff4a4fb08 in rpjR_info () at compiler/GHC/Core/Opt/Simplify.hs:229
#23 0x00007ffff4a4f3c8 in ghc_GHCziCoreziOptziSimplify_simplTopBindszuzdssimplzubinds_info () at compiler/GHC/Core/Opt/Simplify.hs:229
#24 0x00007ffff4a4ff00 in ghc_GHCziCoreziOptziSimplify_zdwsimplTopBinds_info () at compiler/GHC/Core/Opt/Simplify.hs:218
#25 0x00007ffff49e2348 in ss7d_info () at compiler/GHC/Core/Opt/Pipeline.hs:788
#26 0x00007ffff4a5fa30 in ghc_GHCziCoreziOptziSimplifyziMonad_initSmpl1_info () at compiler/GHC/Core/Opt/Simplify/Monad.hs:95

It’s worth noting that function names get mangled by GHC. ghc_GHCziCoreziOptziSimplifyziEnv_zdwsimplBinder_info is a function in the ghc library from the module GHC.Core.Opt.Simplify.Env named $wsimplBinder. See the ghc wiki for a more detailed explanation.
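As a rough aid, here is a sketch of mine that undoes the most common escapes (it is partial; the wiki page linked above documents the full scheme):

-- Partial Z-decoder for GHC symbol names. For example,
--   zDecode "GHCziCoreziOptziSimplifyziEnv" == "GHC.Core.Opt.Simplify.Env"
--   zDecode "zdwsimplBinder"                == "$wsimplBinder"
zDecode :: String -> String
zDecode ('z':'i':rest) = '.' : zDecode rest
zDecode ('z':'d':rest) = '$' : zDecode rest
zDecode ('z':'u':rest) = '_' : zDecode rest
zDecode ('z':'z':rest) = 'z' : zDecode rest
zDecode (c:rest)       = c   : zDecode rest
zDecode []             = []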

All the aBcD_info entries are functions introduced by GHC during optimizations via things like float-out. While their names carry no meaning, debug info thankfully still retains the source lines where they originated. If we think it’s helpful, we can investigate these more closely, but for now we will just ignore them.

The “cleaned up” call stack is this:

#0  0x00007ffff4a57a6e in ghc_GHCziCoreziOptziSimplifyziEnv_zdwsimplBinder_info () at compiler/GHC/Core/Opt/Simplify/Env.hs:772
#4  0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#9  0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#14 0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#15 0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#16 0x00007ffff4a32c98 in ghc_GHCziCoreziOptziSimplify_simplExpr2_info () at compiler/GHC/Core/Opt/Simplify.hs:1752
#18 0x00007ffff4a25588 in ghc_GHCziCoreziOptziSimplify_simplExpr1_info () at compiler/GHC/Core/Opt/Simplify.hs:1179
#23 0x00007ffff4a4f3c8 in ghc_GHCziCoreziOptziSimplify_simplTopBindszuzdssimplzubinds_info () at compiler/GHC/Core/Opt/Simplify.hs:229
#24 0x00007ffff4a4ff00 in ghc_GHCziCoreziOptziSimplify_zdwsimplTopBinds_info () at compiler/GHC/Core/Opt/Simplify.hs:218
#26 0x00007ffff4a5fa30 in ghc_GHCziCoreziOptziSimplifyziMonad_initSmpl1_info () at compiler/GHC/Core/Opt/Simplify/Monad.hs:95

Looking at this call stack as someone familiar with GHC’s source code, I can immediately see that we are in the simplifier (Simplify.hs), currently in the process of walking over all the top-level bindings (simplTopBinds) and simplifying their right-hand sides (simplExpr), likely recursing into some nested expression.

For GHC none of that is really surprising (most of the time is spent simplifying) but it shows how meaningful stack traces can be despite the limitations mentioned above.

What we can’t see from a single stack trace is how much time we spend in these functions compared to the rest of the compiler. That requires sampling the stack often, ideally via perf, which is our goal but doesn’t work quite yet.

Limitations of call-stack sampling in Haskell

Applied to Haskell there are some limitations we have to consider when looking at call stacks. None of these issues are show stoppers, but it’s good to go into this without expecting perfect C-like call stacks from perf support. The limitations we have to deal with are:

Stack chunks

First of all, Haskell stacks are chunked: a Haskell call stack is not necessarily one contiguous block of memory, but can be split over multiple chunks. This means that when capturing the Haskell stack with perf, we can capture at most one chunk, even if the stack consists of several.

What does this mean in practice? Instead of capturing the full call stack, the captured stack would look as if it started “in the middle”. As a result we might not always be able to assign the cost of a function correctly if we fail to capture all of its callers.

However, we don’t expect this to be a big issue in practice.

  • Chunks are usually large enough to capture all or at least a good deal of control flow.
  • The chunk size can be controlled at program startup, so users can select defaults that allow all of their control flow to be captured.
  • Even partial call stacks can often be quite revealing, and significantly speed up the diagnosis of problems.

For these reasons we don’t expect partial stack capture to be a cause for concern.

Tail calls

Haskell makes good use of tail calls, especially compared to C, where they are rather rare. This means a function can “disappear” from the call stack. So while f might call g, which in turn calls h, it’s entirely possible for the call stack to look as if f had called h directly if we capture the call stack during h’s execution. This is something to account for when looking at profiling results, but most of the time there are enough non-tail calls for us to still make sense of the execution.

Lazy evaluation

Lazy evaluation presents a unique set of challenges when reasoning about performance. In particular, the syntactic order of calls (e.g. from HasCallStack) can differ significantly from the operational order of execution. While this can make call stacks harder to understand, in our experience the essential control flow of an application is usually still apparent.

Downsides of changing register assignment

Changing GHC’s register assignment to accommodate stack sampling is not without some costs. Below we try to characterise these costs based on experiences from our prototype patch.

Runtime performance

This is by far the biggest trade-off. Because of differences in the x86-64 instruction encoding, using $rsp (the machine stack register) for the Haskell stack results in larger executables. Larger executables in this case also mean more cache misses and slower execution.

For the nofib benchmark suite the changes to key performance metrics were (geometric mean, less is better):

Metric                 Intel i7-6700K    Intel i7-4790K
Code size              +3.52%            +3.52%
Elapsed (wall) time    +0.45%            +1.01%

We performed measurements on an Intel i7-6700K (limited to 3.8 GHz to avoid thermal throttling) and an Intel i7-4790K (4.0 GHz, boosts up to 4.4 GHz).

Based on our experience with changes to the NCG in the past, and these two machines in particular, the difference between these measurements is explained well by the fact that the higher clock rate means cache misses have a more severe impact on runtime.

Lower-clocked machines are likely to see an even smaller cost for this change, but for simplicity we think it’s fair to say these changes increase runtime on average by 0.5%-1% through their effect on code size.

The question to ask is then: if we already had perf support today, would we sacrifice it for a 0.5-to-1% gain in runtime performance? While the support comes at a slight cost in raw performance, better profiling capabilities mean better-optimized Haskell code. As a result we might end up with the average Haskell executable being faster than before, after all is said and done!

Implementation complexity

LLVM support is likely to require a more complex implementation than our work so far. LLVM very much likes to control the C stack, so GHC simply using the C-stack register for the Haskell stack is unlikely to just work. At the very least we expect this to require changes upstream in LLVM to support the new calling convention, as well as changes to GHC’s own LLVM code generation. However, the amount of effort required is not quite clear yet, as our focus was to implement this for the NCG first.

Theoretically it would also be possible to introduce a new “way”, so that GHC would support both the existing and the new register assignment. This would perhaps delay the need to implement this for the LLVM backend, but it comes with its own problems, most of all that we would not be able to profile applications with perf unless they were compiled the right way to begin with.

Request for feedback

We would like to ask you, the community using Haskell and GHC, if you think pursuing this further is worthwhile.

Is losing a bit of performance worth the benefits we gain? If we already had these benefits, would we be willing to give them up for one more sliver of runtime performance?

So if you have opinions on such a change, one way or another, please let us know. We will definitely look at feedback on the ticket as well as on the ghc-devs and haskell-cafe mailing lists. We will also try to keep an eye on the respective Twitter/Reddit posts once they are up.

Planet Haskell: Haskell IDE: 2021-07-22-summer-of-hls

Summer of HLS

Posted on July 22, 2021 by Fendor

Greetings!

This summer I am honoured to be able to work on HLS and improve its ecosystem! The project consists of three sub-goals: Bringing HLS/GHCIDE up-to-speed with recent GHC developments, improving the very delicate and important loading logic of GHCIDE, and bringing a proper interface to cabal and stack to query for build information required by an IDE.

But before you continue, I’d like to thank the people who made this project possible! You know who it is? It is you! Thanks to your donations to Haskell Language Server OpenCollective we accumulated over 3000 USD in the collective, making it possible for me to dedicate the whole summer to working on this project. Additionally, I’d like to thank the Haskell Foundation, with whom the Haskell IDE Team is affiliated, for their generous donation. So, thank you!

Alright, let’s jump into action: what do we want to achieve this summer?

GHC and GHCIDE

When GHC 9.0 was released, HLS had no support for it for almost three months, and there is still no work-in-progress PR for GHC 9.2. A big part of the migration cycle is caused by the module-hierarchy re-organisation and changes to GHC’s API. Because of that, it has taken a long time to migrate a large part of the ecosystem.

Haskell Language Server is big. In fact, so big that having every plugin and dependency updated immediately is close to impossible without an entire team dedicated to upgrading for multiple weeks. However, the main features of the IDE are implemented in GHCIDE (the workhorse of Haskell Language Server), which has fewer features and fewer external dependencies. As such, contrary to HLS, upgrading GHCIDE within a reasonable amount of time after a GHC release is possible. Thus, we want to port GHCIDE to be compatible with the GHC 9.2 alpha and lay the foundation to publish GHCIDE to Hackage.

Achieving this goal has a clear advantage: an IDE for people who use the latest GHC version. Moreover, it helps developers migrate their own projects to newer GHC versions, since GHCIDE provides a convenient way to discover where an identifier can be imported from.

Multiple Home Units

For a summary and some motivation on what this project is all about see this blog post.

As a TL;DR: it stabilises HLS’s component-loading logic and, furthermore, enables some long-desired features for cabal and stack, such as loading multiple components into the same GHCi session.

Cabal’s Show-Build-Info

If you know of the so-called show-build-info command in cabal, you might chuckle a bit. At least four authors (including myself) have already attempted to merge show-build-info for cabal-install, but it was never finished and merged.

However, implementing this feature would benefit HLS greatly, as it means HLS could eagerly load all components within a cabal project, e.g. provide type-checking and go-to-definition for all components. In particular, this would help the Google Summer of Code project adding symbolic renaming support to HLS. Symbolic renaming can only function properly if all components of a project are known, but currently, for stack and cabal projects, HLS has no way of finding all components and loading them. show-build-info solves this issue for cabal, and there are plans to add a similar command for stack.

Summary

I am happy to continue contributing to the HLS ecosystem and excited for this summer! Now I hope you are as excited as I am. I will keep you all updated on new developments once there is some presentable progress.



Quiet Earth: Watch: SCAVENGER, a Post Apocalyptic Short Filmed During the Pandemic

A post-apocalyptic scavenger's survival routine is interrupted when she discovers a body in the woods.

Scavenger was filmed in Maryland, Fall 2020.

Written, Directed and Cut by: Ben Sottak
Starring: Emmajane Piñeiro-Hoffman as Wendy and Milly Shapiro as Pumpkin (Voice)
Produced by Emmajane Piñeiro-Hoffman and Chris Kenneally
Cinematography by: Steven Russell
Music by: Emmajane Piñeiro-Hoffman
1st AC: Matthew Sullivan
PA: Jesse Aguilar


Watch Scavenger:

Recommended Release: Scavenger


Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: The match

Suzanne just started a new job. Kudos to her. Change is good. She figures about a decade to go before she winds down and retires.

Of course, retirement used to be the goal. Now it’s just scary. Once upon a time people got defined-benefit pensions, knowing how much income there’d be forever. Now seven in ten have no such thing, and are lucky to land a corporate plan with an unknown outcome. That’s called a ‘defined contribution’ pension, or DC.

Suzanne has one of those now. “In a few weeks I’ll need to determine if/how I’ll participate in their plan,” she says. “So I was wondering if you’d consider writing a post about how to take advantage of a company-matched RRSP plan while holding a B&D portfolio of low cost ETFs?”

Ask. And receive. Here we go…

First, the basics. Government workers, nurses, cops, firepeople, teachers, power or transit employees and most other public-sector employees still have DB pensions. The majority of private employers, however, can’t afford to run a defined-benefit plan. (In the US even some governments are finding them financially unsustainable.)

With a DB, the benefit (your pension payment) is defined, thus it’s known in advance and will never change. Contributions are made by both employer and employee. You can see how an employer would be on the hook for decades. In an uncertain world this is not something any corporation, public or private, would want – it’s unlimited liability (since people live so damn long).

With a DC plan you know how much you and the company are putting in, but you have no idea what will be there upon retirement. Just like an RRSP – since the money in the plan grows or shrinks with the assets selected. Typically the employer teams up with a financial outfit (the biggies are SunLife and Manulife) and your money goes into their mutual funds. The amount you end up with is determined by fund performance.

Most companies now match worker contributions, fully or in part and up to a certain percentage of income. That’s good. It’s free money, and you’d be daft not to take it. Astonishingly, millions don’t. A recent survey concluded that as much as $3 billion a year is piddled away by people who don’t understand their plans, or who are too myopic or too broke to make routine contributions.

Seems a third of people who are offered a matched DC plan don’t play. Many make lower incomes and are already stretched with kids and mortgages. Retirement saving just doesn’t figure into their life plans (big mistake). Other people eschew the company match in favour of making their own RRSP contributions and pocketing a tax deduction.

That’s also a fail. Workers get a tax break for the DC plan contributions, and when the employer matches the cash it’s akin to a 100% return on your money. You’d have to be an investing genius to routinely reap that kind of gain in your own plan. So do it. And if you’re not contributing to a company pension plan because you don’t trust your dodgy employer will survive, relax. The funds are protected. And matched contributions from the boss don’t reduce incomes paid to workers. How is this not a win?

Okay, back to S. Here’s her ask…

The company plan offers a 100% RRSP contribution match – on an initial 9% contribution from salary. The fees on the managed funds are obviously higher than low cost etfs but at the outset – it does seem worth it to get the contribution match (up to the maximum), while continuing to contribute outside – in my B&D portfolio, currently at 400,000 in registered/unregistered accounts incl full TFSA and over 50,000 in unused contribution room with a 5-10 year retirement horizon.

What fund allocation strategy makes sense when combined with an existing B&D portfolio? Do I try to choose an equivalent B&D strategy from fund offerings? Or fulfill a specific allocation – like just the safe stuff and rebalance outside this? Then what do I do with the funds once I eventually either change companies or retire – move them to my B&D and take the hit?
No idea.

This is easy. Most DC fund managers – like SunLife – give a choice of assets and most employers have arranged a reduced MER, so the fee hit is diminished (but still high). The best option is to pick one reflecting your own private portfolio mix. If you DIY invest, go with a balanced option with global exposure. If you work with an advisor, s/he should do this job for you (at no charge). If you’ve been a cowboy with your own accounts, pick a safer option in the DC plan. (Bond funds also come with lower mutual fund fees.)

Now, what happens when you (a) retire, (b) get punted or (c) quit that job?

Your plan should go with you. So contact the fund manager and have it moved into a personal account, typically a LIRA. That’s the same as an RRSP but the funds are ‘locked in’ until you reach retirement age, while you retain full discretion over how the assets are invested. Get into low-cost, liquid ETFs. Depending on your age and the terms of your corporate DC plan you might be able to move the money into a regular RRSP – not locked in. Make sure you don’t choose an annuity, however, since in an era of crazy-low interest rates you’ll be cementing lousy returns for eternity.

In conclusion: if your boss offers a matched DC plan, take it. Contribute fully. Invest sanely. Convert wisely. If you have a DB plan, we don’t want to know about it.

About the picture: “I wanted to share is my pupper Keta,” writes Colin. “Picked her up a year ago from a 2 day drive to Fort Nelson last year. My girl was tied up in a ladies yard, not fed and beaten. Not the first animal to be taken away from her. I got her at 6 months and take her everywhere. Almost a year now and life is very different. She has grown leaps and bounds. Sometimes pine cones can be scary and so can a sudden gust of wind but we are living our best life and thought we would share. This is Keta on her first multi day trip resting in the  alpine wild flowers. Keep up the good fight.”

Ideas: BBC Reith Lectures: Mark Carney, Part One

Mark Carney delivers the 2020 Reith Lectures, the BBC’s flagship lecture series. In his lectures, entitled 'How We Get What We Value,' he argues that the worlds of finance, economics, and politics have too often prioritized financial values over human ones, and that the future depends on reversing that shift. In lecture one, he addresses the changing nature of value, and how we've come to equate 'value' with what is profitable. This episode originally aired on February 22, 2021.

Arduino Blog: Using the MKR IoT Carrier board as a game console

One of the first things many makers try to do when they receive a new piece of cool hardware is write a game for it. This is exactly what Johan Halmén did with his Breakout console that uses the Arduino MKR IoT Carrier board and an MKR1000 to both run and display the game. 

Breakout typically involves moving a paddle horizontally along the bottom of the screen to bounce a ball that destroys the bricks above it. However, since the carrier board’s color OLED screen is circular, Halmén had to create a different version of the game, which he calls “BreakIn.” His game features a bunch of hexagonal tiles in the middle and a paddle, controlled by the onboard accelerometer, that moves around the outside. This lets the player tilt the device to move their paddle quickly and accurately. 

Getting the circular display to work was a bit more of a challenge than a normal square one, because coordinates had to be mapped using a bit of trigonometry first. Additionally, figuring out the angle of tilt and the collision geometry took some math as well. But once everything was up and running, the game was very fun to play, as can be seen in Halmén’s demo video below. 

To read more about the code that went into this project, check out its Instructables write-up here.

The post Using the MKR IoT Carrier board as a game console appeared first on Arduino Blog.

Penny Arcade: News Post: Rocket Measuring

Tycho: This space shit is the most thoroughly dunked phenomenon; it comes pre-dunked, in the package. Dunk runs off it, down your arm, until it pools and drips from your elbow. They've managed to make space boring. And then, in the manner of a Moses returning from Mount Sinai, Bezos has returned to these benighted shores with a moral framework around something called "civility" with some tactically deployed love bombs to inoculate them from criticism. When you start talking about this shit, there's an inbuilt Greek chorus of people to chew on your ankle because you were critical in any way…

Penny Arcade: Comic: Rocket Measuring

New Comic: Rocket Measuring

CreativeApplications.Net: Cryptid – Animatronic light sculpture by Michael Candy

Cryptid – Animatronic light sculpture by Michael Candy
Created by Michael Candy, 'Cryptid' is an animatronic light sculpture that uses 18 linear actuators and open source Phoenix hexapod code to walk through a space. As human and robotic, natural and synthetic are increasingly amalgamated, the project questions whether machines could be considered a subspecies.

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Recording



Click here to go see the bonus panel!

Hovertext:
If you had a movie that was just 6 hours of anyone's life it'd be simultaneously too obscene and too boring to screen in theatres.


Today's News:

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Artist Spotlight: Jiab Prachakul

Disquiet: Duets with the Golden Gate Bridge

Perhaps you’ve heard the news about how the Golden Gate Bridge here in San Francisco, where I live, has taken to singing. Repairs to the bridge led to a unique teachable moment about the physics of sound: high winds cause it to drone mellifluously (or annoyingly, according to some locals, though not me) all around the city. The drone is hard to capture because, by definition, it happens when the winds are themselves making noise. The bridge also sounds different depending on where you are. I’ve posted footage from my backyard, not that my cellphone captured anything remotely like what it is like to stand there. It is truly alien, the theremin of the gods.

Much as nature abhors a vacuum, alien music abhors isolation. And thus the Golden Gate Bridge has drawn to it some local musicians. This isn’t the first track I’ve heard in which someone tries to play along with the bridge, but it’s certainly among the most beautiful. Nate Mercereau, as I learned in a news story in yesterday’s issue of the San Francisco Chronicle, has recorded a four-song EP, Duets, on which he plays live along with the bridge. There’s also a video, shown up above, in which he sits perched in the Marin Headlands with the bridge in the background. As Mercereau told the Chronicle’s Aidin Vaziri, “It’s the largest wind instrument in the world right now.”

The video opens with an extended sequence of the bridge on its own. Nearly a minute passes before Mercereau, eventually seated on a stool behind a battery of pedals, begins to intone slow, aching tones that meld beautifully with the bridge itself. He is careful to keep the playing subtle, quiet. It never threatens to overcome the bridge. Instead, it flows in and out of the underlying hum.

The playing on the Duets EP pushes a little further. On “Duet 1,” the guitar sounds at times almost like a flute. On “Duet 2,” a more full-bodied part suggests some hybrid of violin and saxophone. On “Duet 4,” Mercereau posits drones that sit in contrast with the main source audio. Throughout, the bridge just sings on. Perhaps when Mercereau is done, another musician will take his seat on that stool.

This is the latest video I’ve added to my ongoing YouTube playlist of fine live performance of ambient music. Video originally posted at youtube.com. More from Mercereau at howsorecords.com, instagram.com/natemercereau, and twitter.com/natemercereau.

Jesse Moynihan: Forming 327

Planet Haskell: Tweag I/O: Integrated shrinking in QCheck

In Property Based Testing (PBT) the programmer specifies desirable properties or invariants for the code under test, and uses a test framework to generate random inputs to find counter-examples. For example, “a reversed list has the same size as the original list”, which can be written as:

fun l -> List.length (List.rev l) = List.length l
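
With QCheck, such a property is typically wrapped into a test value and run; a minimal sketch (treat the exact labels of QCheck’s API as indicative):

let rev_preserves_length =
  QCheck.Test.make
    ~name:"reverse preserves length"
    QCheck.(list int)
    (fun l -> List.length (List.rev l) = List.length l)

(* raises an exception describing the counter-example if the property fails *)
let () = QCheck.Test.check_exn rev_preserves_length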

Imagine this test fails for the list [42; 9079086709; -148; 9; -9876543210]. Does this counter-example fail the test because there are 5 elements? Or because there are negative numbers? Or maybe due to the big numbers? Many reasons are possible.

To help narrow down the cause of test failures, most PBT libraries provide a feature called shrinking. The idea is that once a test fails for a given input, the test engine will try less complex inputs, to find a minimal counter-example. In the example above, if shrinking reduces the minimal failing input to [-1] then the developer will more quickly find the root cause: most likely a problem with negative numbers.

This post discusses a type of shrinking called integrated shrinking in OCaml — this feature has recently been merged into QCheck, and will appear in QCheck2.

Shrinking in QCheck1

In QCheck1, the type of an arbitrary (used to generate and shrink input values) is equivalent to:

type 'a arbitrary = {
  generate : Random.State.t -> 'a;
  shrink : 'a -> ('a -> unit) -> unit
}
  • generate is used to generate random values
  • shrink is used in case of test failure to find a smaller counter-example

If the second argument of shrink is unsettling, you can simply read it as “the test to run again on smaller values”. For example, to aggressively shrink (try all smaller numbers) on an int, one could implement shrink as such:

let shrink bigger run_test =
  for smaller = 0 to bigger - 1 do
    run_test smaller
  done

For convenience, it is never mandatory in QCheck1 to provide a shrinking function: the shrink field is therefore an option. By default, no shrinking is done.

Problems with manual shrinking

There are several problems with the QCheck1 approach, which we will refer to as manual shrinking.

Invariants

Many generators enforce some invariants, meaning that shrinking must also ensure these invariants. For example, a generator of numbers between 10 and 20, in case of test failure, must not try numbers lower than 10! In practice this usually means duplicating the invariant code in generate and shrink. This repetition is tedious and creates a risk of introducing a discrepancy between the generator and the shrinker, causing shrinking to be unreliable.
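
As a sketch of this duplication (reusing the simplified QCheck1-style record from above; range_10_20 is illustrative), a generator of numbers between 10 and 20 has to encode the lower bound twice:

type 'a arbitrary = {
  generate : Random.State.t -> 'a;
  shrink : 'a -> ('a -> unit) -> unit
}

let range_10_20 : int arbitrary = {
  (* invariant: 10 <= n <= 20 *)
  generate = (fun random -> 10 + Random.State.int random 11);
  shrink =
    (fun bigger run_test ->
      (* the same invariant must be restated here: never try values below 10 *)
      for smaller = 10 to bigger - 1 do
        run_test smaller
      done);
}

If the generator’s bounds change but the shrinker’s do not, shrinking silently produces values the generator could never have produced.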

Providing shrinking is optional

Because providing the shrinking function is optional in a QCheck1 test, in practice developers forget, or don’t take the time, to implement it for each generator. As a consequence, when a test fails, developers either need to interrupt their workflow to implement all the missing shrinkers, or try to identify the problem without shrinking. Either way, the developer is slowed down.

Feedback loop and return on investment

Writing PBT generators requires investing a bit more work up-front than unit tests, but this investment pays off a few minutes later when you write and run your tests.

In contrast, writing shrinking code up-front can feel frustrating as it might not provide a benefit soon, or at all. This frustration is usually worsened by the fact that most shrinking code looks the same:

  • shrinking a product type (record or tuple) amounts to calling the shrinking functions of each field in sequence (see the sketch just after this list)
  • shrinking a sum type amounts to pattern matching and then calling the shrinking function of the inner value
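
For example, a manual shrinker for a pair, in the QCheck1 style above, is pure boilerplate; a minimal sketch (shrink_pair is illustrative, not part of QCheck1’s API):

let shrink_pair
    (shrink_a : 'a -> ('a -> unit) -> unit)
    (shrink_b : 'b -> ('b -> unit) -> unit)
    ((a, b) : 'a * 'b)
    (run_test : 'a * 'b -> unit) : unit =
  (* shrink the first field while keeping the second fixed... *)
  shrink_a a (fun smaller_a -> run_test (smaller_a, b));
  (* ...then the second field while keeping the first fixed *)
  shrink_b b (fun smaller_b -> run_test (a, smaller_b))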

QCheck1 arbitraries don’t compose well

As shown above, an arbitrary is a pair of a generator — a function that produces values — and a shrinker — a function that consumes values. As such, it is invariant. This makes composition hard, often even impossible: e.g. if you have an 'a arbitrary and a function 'a -> 'b, it is impossible to obtain a 'b arbitrary while maintaining shrinking:

type 'a arbitrary = {
  generate : Random.State.t -> 'a;
  shrink : 'a -> ('a -> unit) -> unit
}

let map (f : 'a -> 'b) ({generate = generate_a; shrink = shrink_a} : 'a arbitrary) : 'b arbitrary =
  let generate_b random = f (generate_a random) in
  let shrink_b bigger_b run_test_b = shrink_a ??? (fun smaller_a -> run_test_b (f smaller_a)) in
  {generate = generate_b; shrink = shrink_b}

Notice the ??? placeholder: its type is 'a, but all we have at our disposal is f : 'a -> 'b and bigger_b : 'b. Starting from a value of type 'b, it is therefore impossible to produce the value of type 'a we need here.

In QCheck1, map takes an optional reverse function ~rev:('b -> 'a) to fill that ??? placeholder and thus maintain shrinking, but all the problems listed above remain. Also, not all functions can be reversed: e.g. consider the function is_even : int -> bool. Once you have a bool, you can’t get back the original int.
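
A sketch of that QCheck1-style escape hatch (assuming some int_arb : int QCheck.arbitrary; treat the exact labels as indicative):

(* int_of_string is safe here: every generated string comes from string_of_int *)
let string_arb : string QCheck.arbitrary =
  QCheck.map ~rev:int_of_string string_of_int int_arb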

Enter Integrated Shrinking

Rather than having a pair of generator and shrinker, integrated shrinking (a kind of automated shrinking) bakes shrinking directly into the generation process: whenever we generate a value, we also generate its shrunk values, and the shrunk values of those shrunk values, etc. This design is directly inspired by Hedgehog. This effectively gives a tree of values:

type 'a tree = Tree of 'a * 'a tree Seq.t

type 'a arbitrary = Random.State.t -> 'a tree
  • A tree is a root (a single value) and a (lazy) list of sub-trees of shrunk values
  • An arbitrary is a function that takes a random generator and returns a tree

For example, an int arbitrary may generate such a tree:

3
├── 0
├── 1
│   └── 0
└── 2
    ├── 0
    └── 1
        └── 0
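
A tree like this can be produced by a recursive generator; here is a minimal sketch (int_tree is illustrative, not QCheck2’s actual implementation) in which every n shrinks to all smaller naturals:

type 'a tree = Tree of 'a * 'a tree Seq.t

let rec int_tree (n : int) : int tree =
  (* lazily enumerate the sub-trees for 0 .. n-1 *)
  let rec shrinks i () =
    if i >= n then Seq.Nil
    else Seq.Cons (int_tree i, shrinks (i + 1))
  in
  Tree (n, shrinks 0)

(* int_tree 3 is exactly the tree pictured above *)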

Notice the use of Seq.t (lazy list) rather than list (strict list): it is unnecessary to generate a shrunk value until it is needed during shrinking.

Consider the (obviously wrong) property “all integers are even”, which can be written in OCaml as fun i -> i mod 2 = 0. Let’s see what happens when this property test runs and the generated integer tree is the one above:

  1. the arbitrary function is called with a random state to generate a tree of values (the one above)
  2. the tree root 3 is used to call the test, and fails (3 is not even)
  3. we now want to shrink 3 to find a smaller counter-example that also fails
  4. the property is tested against the first shrink 0 of 3, and the test succeeds: 0 is not a counter-example, so it is ignored
  5. the property is tested against the second shrink 1 of 3, and the test fails: 1 is a (smaller) counter-example!
  6. the property is tested against the first shrink 0 of 1, and the test succeeds: 0 is not a counter-example, so it is ignored
  7. 1 has no other shrunk value, thus 1 is the smallest counter-example we could find: it is reported to the user (the search loop below sketches this walk)
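
That search can be written as a small recursive loop over the tree; a sketch (minimize is illustrative, and assumes the test has already failed for the root):

let rec minimize (Tree (root, shrinks) : 'a tree) (prop : 'a -> bool) : 'a =
  (* walk the shrunk values left to right *)
  let rec try_shrinks s =
    match s () with
    | Seq.Nil -> root  (* nothing smaller fails: root is the minimal counter-example *)
    | Seq.Cons ((Tree (x, _) as subtree), rest) ->
      if prop x
      then try_shrinks rest       (* x passes: ignore it, try the next shrink *)
      else minimize subtree prop  (* x fails: keep shrinking from x *)
  in
  try_shrinks shrinks

(* minimize (int_tree 3) (fun i -> i mod 2 = 0) returns 1, as in the steps above *)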

Composing arbitraries

Unlike manual shrinking, integrated shrinking does not consume values, it only produces values. Hence integrated shrinking is covariant (while manual shrinking is invariant). This innocuous difference is what makes integrated shrinking better at composition: e.g. it is now possible to implement the map function from above in QCheck2!

type 'a tree = Tree of 'a * 'a tree Seq.t

type 'a arbitrary = Random.State.t -> 'a tree

let rec map_tree (f : 'a -> 'b) (tree_a : 'a tree) : 'b tree =
  let Tree (root_a, shrinks_a) = tree_a in
  let root_b = f root_a in
  let shrinks_b = Seq.map (fun subtree_a -> map_tree f subtree_a) shrinks_a in
  Tree (root_b, shrinks_b)

let map (f : 'a -> 'b) (generate_a : 'a arbitrary) : 'b arbitrary = fun random ->
  let tree_a = generate_a random in
  map_tree f tree_a

The map_tree function maps f over an 'a tree by mapping f on the root value, and recursively mapping f on all its sub-trees. Then map takes care of wrapping up the function over the random state, and delegates to map_tree.
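
As a quick usage sketch (reusing the illustrative int_tree generator from above): mapping a monotonic function transforms the generated value and, transparently, every shrunk value in its tree:

let int_arb : int arbitrary = fun random -> int_tree (Random.State.int random 100)

(* doubling is monotonic, so the shrink tree stays well-behaved *)
let even_arb : int arbitrary = map (fun i -> 2 * i) int_arb

Every value even_arb generates is even, and so is every shrunk value, so the generator’s invariant travels with it, with no separate shrinker to keep in sync.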

Caveats

Integrated shrinking is not an absolute improvement over manual shrinking: it comes with limitations. I present some caveats below; you can find a more thorough comparison with manual shrinking in Edsko de Vries’s blog post.

Monotonicity

Integrated shrinking (and in particular the implementation of composition functions like map, bind, etc.) assumes that all composed functions are monotonically non-decreasing, i.e. that smaller inputs give smaller outputs. In practice this is true the vast majority of the time, so the assumption is not too constraining. As a counter-example, consider mapping is_even over a generator of numbers. is_even is not monotonic, no matter which order we pick for true and false: e.g. with the order true < false, we have 0 < 1 < 2 but is_even 0 < is_even 1 > is_even 2. Indeed, while 3 -> 2 -> 1 -> 0 is a good shrinking of the input, mapping is_even over this shrink tree gives false -> true -> false -> true, which is a bad shrinking tree. In that case, it does not make sense to rely on an int arbitrary to build a bool arbitrary.

Shrinking strategy for Algebraic Data Types

Shrinking product types (records and tuples) can be done using various strategies. For example, for a record {a; b} one can:

  • completely shrink the first value before the second (completely shrink a, and then completely shrink b)
  • interleave shrinking of the first and second values (shrink a bit a, then shrink a bit b, then shrink again a bit a, then b, etc.)
  • shrink fields together (shrink both a and b at the same time)
  • etc.

Similarly, shrinking for sum types (variants) can be done using various strategies. For example, for a variant type A of int | B of string one can:

  • consider A smaller than B, thus B shrinks to A before shrinking on the inner string, and A only shrinks on the inner int
  • consider neither A nor B is smaller, thus A only shrinks on the inner int, and B only shrinks on the inner string, but shrinking never “jumps” to another variant
  • etc.

For both product and sum types, no strategy is better than the others. QCheck2 arbitrarily uses the first strategy in each case, which may not always be efficient or even desirable. As with monotonicity, in practice this is rarely a problem.

Finer control over shrinking

To either change the shrinking strategy, or switch to manual shrinking, QCheck2 provides a lower-level API:

val make_primitive : gen : (Random.State.t -> 'a) -> shrink : ('a -> 'a Seq.t) -> 'a arbitrary

Notice how close this is to the QCheck1 arbitrary type (except the shrink function no longer needs to call run_test).
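
For example, a minimal sketch of an integer arbitrary built with make_primitive (the generation range and shrink choices are illustrative):

let int_arb : int arbitrary =
  make_primitive
    ~gen:(fun random -> Random.State.int random 1000)
    ~shrink:(fun n ->
      if n = 0 then Seq.empty             (* 0 is already minimal *)
      else List.to_seq [ n / 2; n - 1 ])  (* try a big jump, then a small step *)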

Conclusion

While integrated shrinking is not strictly better than manual shrinking (or other kinds of shrinking), my experience is that its benefits largely outweigh its shortcomings. I am thrilled this was merged, and for once I am looking forward to my next failing test!

I want to give a shout out to Simon Cruanes, author and maintainer of QCheck, for his tight collaboration on Integrated Shrinking, from design to review!

Arduino Blog: The Snoring Guardian listens while you sleep and vibrates when you start to snore

Snoring is an annoying problem that affects nearly half of all adults and can cause others to lose sleep. Additionally, the ailment can be a symptom of a more serious underlying condition, so being able to know exactly when it occurs could be lifesaving. To help solve this issue, Naveen built the Snoring Guardian — a device that can automatically detect when someone is snoring and begin to vibrate as an alert. 

The Snoring Guardian features a Nano 33 BLE Sense, which captures sound from its onboard microphone and determines whether it constitutes a snore. Naveen employed Edge Impulse along with the AudioSet dataset, which contains hundreds or even thousands of labeled sound samples that can be used to train a TensorFlow Lite Micro model. The dataset within Edge Impulse was split between snoring and noise, with the latter label used to filter out external sounds that are not snores. With the spectrograms created and the model trained, Naveen deployed it to his Nano 33 BLE Sense as an Arduino library.

The program for the Snoring Guardian gathers new microphone data and passes it to the model for inference. If the resulting label is “snoring,” a small vibration motor is activated that can alert the wearer. As an added bonus, the entire thing runs off rechargeable LiPo batteries, making this an ultra-portable device. You can see a real-time demonstration below as well as read more about this project on Hackster.io.

The post The Snoring Guardian listens while you sleep and vibrates when you start to snore appeared first on Arduino Blog.

Schneier on Security: NSO Group Hacked

NSO Group, the Israeli cyberweapons arms manufacturer behind the Pegasus spyware — used by authoritarian regimes around the world to spy on dissidents, journalists, human rights workers, and others — was hacked. Or, at least, an enormous trove of documents was leaked to journalists.

There’s a lot to read out there. Amnesty International has a report. Citizen Lab conducted an independent analysis. The Guardian has extensive coverage. More coverage.

Most interesting is a list of over 50,000 phone numbers that were being spied on by NSO Group’s software. Why does NSO Group have that list? The obvious answer is that NSO Group provides spyware-as-a-service, and centralizes operations somehow. Nicholas Weaver postulates that “part of the reason that NSO keeps a master list of targeting…is they hand it off to Israeli intelligence.”

This isn’t the first time NSO Group has been in the news. Citizen Lab has been researching and reporting on its actions since 2016. It’s been linked to the Saudi murder of Jamal Khashoggi. It is extensively used by Mexico to spy on — among others — supporters of that country’s soda tax.

NSO Group seems to be a completely deplorable company, so it’s hard to have any sympathy for it. As I previously wrote about another hack of another cyberweapons arms manufacturer: “It’s one thing to have dissatisfied customers. It’s another to have dissatisfied customers with death squads.” I’d like to say that I don’t know how the company will survive this, but — sadly — I think it will.

Finally: here’s a tool that you can use to test if your iPhone or Android is infected with Pegasus. (Note: it’s not easy to use.)

Arduino Blog: An Arduino-powered micro quadruped that fits in the palm of your hand

Arduino-powered quadruped robots are quite common projects for hobbyists to build once they are a bit more comfortable with embedded systems. One problem with many of the pre-designed quadruped platforms is that they require a lot of time to assemble owing to their large size. This is what inspired Technovation to come up with their own micro quadruped robot, which requires only a fraction of the normal amount of material and hours to construct.

The robot is based around a central chassis that houses the Arduino Uno and sensor shield, which provide power and signaling to the motors. Underneath this hardware stack are four servos that rotate to the side and act as hip joints. Lastly, each leg comprises two servos to allow for forward motion. 

In order for the Arduino to translate a desired direction into discrete positions for the servo motors, Technovation had to implement a few kinematic equations within the robot’s firmware. These consist of movement functions, which create gaits by specifying where and when each leg should move. Several parameters, including speed, leg length, and even the motion paths themselves, can be fine-tuned or expanded to add more capabilities.

You can see how this micro quadruped works in its demonstration video here or you can read more about the project in Technovation’s Instructables write-up.

The post An Arduino-powered micro quadruped that fits in the palm of your hand appeared first on Arduino Blog.

Ideas: The Rising Star of Judith Shklar, the skeptical liberal

Why it matters to say ‘cruelty is the worst thing that we do’, according to fans of the political philosopher Judith Shklar. This episode originally aired on January 14, 2021.

things magazine: Six underground

The Airstream Funeral Coach, ca 1980s (via Cars That Never Made It) / Vintage Covers. Sci-fi reimagined / beats and samples by Wan-Vox / always worth a visit, Synth History / Eileen Gray’s E-1027 House completes its restoration / entertainingly … Continue reading

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Gold



Click here to go see the bonus panel!

Hovertext:
The take-home question here is whether the fake accent is worse than the economic theory.


Today's News:

Tea Masters: Tea, the misunderstood guest of TV series and literature in China


Most Chinese TV series are set in imperial China, before 1911, in various dynasties. The most famous of the classical dynasties for its intrigues is probably that of the 'Three Kingdoms' (220-280), which gave birth to one of the canons of Chinese literature of the same name. But the Han, Tang, Song, Ming and Qing dynasties are also often brought to the screen. And sometimes, for fantastical gongfu fictions or series, the dynasty is not really specified. The advantage, for the creators of these stories set in ancient times, is to escape the censorship of the Communist Party: since the party did not yet exist back then, there is no need to reference it or glorify it.

Story of Yanxi Palace, a rare success
I am not a devoted viewer of these series, but since they are entertaining and sometimes very beautiful, I happen to watch bits of them, especially when the protagonists are shown preparing tea.

Two things strike me. First, tea is more and more present in these series. Second, the anachronisms are more than numerous, almost systematic. It is very rare for the way the tea is prepared to correspond faithfully to the reality of the period. The most glaring errors are the use of small teapots to prepare tea in the Ming dynasty or earlier (small teapots only date from the Qing dynasty, 1644-1911); or loose-leaf tea appearing in dynasties before the Ming; or today's commercial accessories showing up in scenes set in the Qing dynasty...

The Flame's Daughter. Ouch, that modern kettle!

Why these anachronisms?
Before assuming a conspiracy, one should always assume ignorance, as Michel Rocard used to say. And besides, it is genuinely difficult to find accessories that faithfully imitate the reality of very ancient times, especially on small budgets. For the Ming and the Qing it is easier to find identical accessories, which is why those anachronisms are less obvious to the eyes of novice tea lovers.

But if one had to look for a less innocent explanation, I would say that when we see brand-new kettles or jars, it is often disguised advertising. These are product placements similar to those the big brands make in James Bond films (when he drives the latest BMW or watches a video on an iPad...). This kind of advertising is very common in Chinese TV series.

My other explanation is that there is a desire to promote tea as a highly distinguished social activity. Knowledge of tea inevitably brings great respect to that character in the series; it shows that he has more than one string to his bow. The mastery of tea, and the possession of quality accessories and renowned leaves, add to the character's social prestige within the plot. It is also disguised advertising for tea itself and, with China's 1.4 billion inhabitants, it explains why the best teas and accessories are becoming more and more expensive in China.

Moreover, the practice of Chinese tea is also a way of respecting ancestral traditions. This explanation may be a little far-fetched but, by showing that tea has not changed in a thousand years, which is false, there may also be a conservative political message in it, one opposed to change...

Even in the books of Jin Yong (1924-2018), the Hong Kong writer, there are anachronisms. Jin Yong is the great Chinese author of swashbuckling novels. The translation of 'Tian Long Ba Bu' was recently completed and you will find this story in 5 volumes from Editions You Feng. I recommend it as entertaining reading at the beach this summer. The story takes place during the Song dynasty (960-1279), when tea was green, powdered, whisked in black bowls and drunk from those same bowls. Despite this historical fact, on page 263 of volume 5 one can read: "Tea tables are arranged in rows, each bearing a gaiwan of white porcelain with blue designs." And on the next page: "He takes the cup, removes the lid and tips the tea, leaves and all, into his mouth." So this is not powdered green tea, but whole leaves. Leaving aside the fact that the character (a native of Tufan) also swallows the leaves, we see that the preparation is the one we still encounter today. Yet, while a few white porcelains with blue decoration have been found from the Tang dynasty (618-907), it is only from the Yuan (1271-1368) onwards that this style spread across China. And the gaiwan only appears as an instrument for preparing tea under the Ming (1368-1644).

As for Jin Yong, I think this historical error comes either from ignorance of the history of tea, or from a desire not to make his novel harder to read by explaining how tea was made in former times. Indeed, his narrative has a lot of pace, and his descriptions are rare and brief.
Qinghua gaiwan, red, green and gold over the glaze, with a decoration of translucent 'rice grains'

The best classical Chinese novel to speak accurately about tea, when tea comes up, is Hong Lou Meng, the Dream of the Red Chamber, by Cao Xueqin. But this masterpiece of the mid-18th century is of such a level that even its exegetes get it wrong when explaining the references to tea and tea accessories contained in the book. One should know that Cao Xueqin belonged to the family that made the emperor's clothes. He frequented the court and was therefore very well informed about the finest things the China of his era offered for the entertainment of the elites. The references in this complex book, teeming with characters, are often difficult to understand and interpret; indeed, 'Hongxue' is the name given to the study of this exceptional work of Chinese literature. Yet, according to our study with Teaparker, most of the current explanations of the teas and accessories mentioned in this book are inaccurate! 

Tea is thus oversimplified in modern works, or else too complex and elusive in the old ones, even for the Chinese! Knowing tea (or not knowing it) is therefore indeed a real social marker in China. There are the learned, the ignorant and, worse than the ignorant, those who believe they know, yet hold their knowledge only from what they have seen in today's TV series!
Concubine Oolong from Shan Lin Xi, 2020

OCaml Weekly News: OCaml Weekly News, 20 Jul 2021

  1. Writing a REST API with Dream
  2. OTOML 0.9.0 — a compliant and flexible TOML parsing, manipulation, and pretty-printing library
  3. soupault: a static website generator based on HTML rewriting
  4. OCaml 4.13.0, second alpha release
  5. OCamlFormat 0.19.0
  6. OCaml Café: Wed, Aug 4 @ 7pm (U.S. Central)

The Sirens Sound: The Fabulous Indie Bands You Cannot Miss

Adding outstanding indie bands to your playlist is always a good idea. This kind of music offers something you rarely get from today's mainstream, and it can leave a lasting impression that will make you fall in love with indie bands. Here are some fabulous indie bands you simply cannot miss; let's find out about them below.

Big Thief
The first fabulous indie band you have to listen to is Big Thief, one of the best indie bands from Brooklyn, New York, USA. Their songs will amaze you, especially the folk-rock beats combined with touches of electric instrumentation. Elements of their music will remind you a little of Bruce Springsteen, whom they count among their biggest inspirations. In short, their album "Capacity," released in 2017, is highly recommended listening. On top of that, their live performances are absolutely lit: they seem to give all of their energy so they can connect with everyone enjoying their music.

Slow Hollows
Furthermore, the second indie band you cannot ignore is Slow Hollows, which features some notable musicians: Austin Anderson, Daniel Fox, Jackson Katz, and Aaron Jassenoff, who have been making music together since 2013. This amazing band has been on the rise, taking a big step in their career after successfully stealing the attention of music fans across the country in 2019. In short, they offer sharp rock and post-punk that sounds cool combined with strings and horns. Not only that, the low-pitched voice of the band's lead singer is something else entirely, adding to the dreamy texture of their songs, as you can hear on their 2018 album "Lessons For Later." Clearly, you will never regret adding Slow Hollows to your playlist; it is one of the best indie bands to anticipate.

The post The Fabulous Indie Bands You Cannot Miss first appeared on The Sirens Sound.

Explosm.net: Comic for 2021.07.20

New Cyanide and Happiness Comic

LaForge's home page: Emergency warnings in the mobile network + Cell Broadcast (Part 2)

[this post was originally written in German, as it is targeted at the current German public discourse]

A few additions to my blog post from yesterday.

I use the generic term PWS instead of SMSCB, because strictly speaking SMSCB exists only in the 2G system, and is only one layer that is used there for emergency alerting.

On emergency-warning apps

Of course special, national German civil-protection apps are useful too! But they should at most be offered in addition, after basic alerting has first been implemented via Cell Broadcast / PWS, in conformance with the relevant international (and EU) standards. After all, nobody says: we no longer need news programs on the radio because we already have them on television. You want to broadcast on all available channels, and first use those with the most universal reach and clear technical advantages, before additionally alerting on other channels.

What does PWS look like for me as a user?

There seem to be major misunderstandings here about what this ultimately looks like on the phone. Understandably so: in this country you never see it, unless you happen to be in a lab/hobbyist network, e.g. at a CCC event, where the Osmocom project has at times sent out such messages.

The PWS (ETWS, CMAS, WEA, KPAS, EU-ALERT, ...) messages are received by the phone and then handled according to configuration and priority. For the USA, WEA mandates that alerts of a certain priority class (e.g. the Presidential Level Alert) are always forcibly displayed and are always accompanied by a loud, siren-like alarm tone. It is even explicitly forbidden for the user to be able to disable or mute these alerts anywhere. So it does not matter whether the phone happens to be set to silent, or whether it is not right next to me at the moment.

On some devices the warnings are even read aloud over the speaker by a text2speech engine after the alarm tone sounds. Whether that is a regulatory requirement of one of the national systems, I do not know; in any case I have seen it in some cases when I sent such alerts in private lab networks using Osmocom software.

A few more technical details

  • PWS messages continue to be broadcast even when the cell has lost its network connection. So if, for example, the fiber link to the core network is already gone but power is still available, warnings previously delivered from the CBC (Cell Broadcast Centre) to the mobile cell continue to be transmitted autonomously by the cell for their validity period. This is another inherent technical advantage that can never be achieved with an app, because an app requires the complete mobile network with all its internal links and the core network, plus the operator's Internet connection to the app provider's server, to be working end to end.

  • PWS messages can, at least technically, also be received by phones that are not registered in the network at all, or that have no SIM inserted. Whether the standards require this, and/or whether the respective phone software implements it that way, I do not know and one would have to check. Technically it is plausible, much like placing emergency calls, which is technically possible in these cases too.

On the costs

If, as in an ideal world, providing emergency alerting had already been a requirement at the time spectrum licenses were awarded, all of this would simply have been supported quietly from the start. Nobody would have had to invest extra money, because this minimal technical requirement would then already have been part of the operators' tenders for purchasing their equipment. Moreover, we already had Cell Broadcast in all three German networks in the past, i.e. the technology was once there [for entirely different reasons] but was cut away at some point to save money.

Introducing it retroactively now of course means that nobody planned for it, and that every market participant wants to get richly paid for it. The vendors' delight goes roughly like: "Oh, you now want more than you specified back when you purchased? Lovely, then let us write you a quote."

Technically, all of this is trivial. I would estimate the complete development of all components for PWS in 2G/3G/4G/5G at a low one-time six-figure sum. And that is the one-time development investment, which is then spread over all devices/countries/networks. Against the billions invested in developing and procuring mobile network technology, that is a joke.

Equipment such as base stations from all relevant manufacturers naturally supports PWS out of the box. They do not build different devices for Germany than the ones deployed in the UK, NL, RO, US, ... The market is international; the same technology is available everywhere.

Because we are late now, this will of course be exploited from all sides. Every base station vendor will hold out its hand and say this now costs X EUR per cell per year in additional license fees. And the vendors of the central component, the CBC, will also hold out their hands, as is customary in the industry, with hefty annual license fees. And the consultants will all hold out their hands too, because there is once again something to integrate, to test, ... The CBC is not complex technology. If you had it developed once as open source and deployed it in all networks, you would get it practically for free. But that would require actually engaging with the technology, understanding what simple software this is, and taking different procurement paths than just ringing up your three existing suppliers, who want to line their pockets.

Public discussion speaks of 20-40 million EUR. Those are the inflated demands of the market participants, nothing else. But even if you take the view that you would rather throw the money out of the window than try open-source alternatives, that order of magnitude is still vanishingly small compared to the other acquisition and operating costs of a mobile network. Not to mention the follow-up costs in the area of recovery/rescue, personal injury, etc. that could thereby be saved in disasters over the medium term.

Or put another way: if even Romania, economically much weaker, can afford this, then the Federal Republic of Germany will surely manage it too.

Ideas: Notes From Utopia: The Arab Spring 10 years on

Ten years ago, the Middle East was in convulsions as protesters attempted revolution in several countries. Looking back, what can we learn from those experiments in human collaboration? This episode aired on January 26, 2021.

Schneier on Security: Candiru: Another Cyberweapons Arms Manufacturer

Citizen Lab has identified yet another Israeli company that sells spyware to governments around the world: Candiru.

From the report:

Summary:

  • Candiru is a secretive Israel-based company that sells spyware exclusively to governments. Reportedly, their spyware can infect and monitor iPhones, Androids, Macs, PCs, and cloud accounts.
  • Using Internet scanning we identified more than 750 websites linked to Candiru’s spyware infrastructure. We found many domains masquerading as advocacy organizations such as Amnesty International, the Black Lives Matter movement, as well as media companies, and other civil-society themed entities.
  • We identified a politically active victim in Western Europe and recovered a copy of Candiru’s Windows spyware.
  • Working with Microsoft Threat Intelligence Center (MSTIC) we analyzed the spyware, resulting in the discovery of CVE-2021-31979 and CVE-2021-33771 by Microsoft, two privilege escalation vulnerabilities exploited by Candiru. Microsoft patched both vulnerabilities on July 13th, 2021.
  • As part of their investigation, Microsoft observed at least 100 victims in Palestine, Israel, Iran, Lebanon, Yemen, Spain, United Kingdom, Turkey, Armenia, and Singapore. Victims include human rights defenders, dissidents, journalists, activists, and politicians.
  • We provide a brief technical overview of the Candiru spyware’s persistence mechanism and some details about the spyware’s functionality.
  • Candiru has made efforts to obscure its ownership structure, staffing, and investment partners. Nevertheless, we have been able to shed some light on those areas in this report.

We’re not going to be able to secure the Internet until we deal with the companies that engage in the international cyber-arms trade.

new shelton wet/dry: Every day, the same, again

How children are spoofing Covid-19 tests with soft drinks

20% of Americans believe the conspiracy theory that microchips are inside the COVID-19 vaccines

18% had Hallux valgus (deformed big-toes) caused, very probably, by wearing overly pointy shoes

6-7% of the general population hear voices that don’t exist

In the six studies we conducted, we consistently reported that clone images elicited higher eeriness than individuals with different faces; we named this new phenomenon the clone devaluation effect.

These kinds of “zero-click” attacks, as they are called within the surveillance industry, can work on even the newest generations of iPhones.

In 1995, on the occasion of the 100th anniversary of cinema, the Vatican compiled a list of 45 “great films”

Dead Startup Toys

This beach does not exist

Penny Arcade: News Post: Double Reverse Irony

Tycho: If you aren't sort of a dork, the problematic state that cheat prevention software and emulation are currently in, and the impact that might have on a device like the Steam Deck, isn't broadly understood. Before we entered The Hell Dimension a year and a half ago, my main work machine was Linux Mint, and I really liked it. I had to stretch my mind taut over a hoop and embroider it meticulously to resolve a couple issues I had, and I got way smarter, but I'm by no means an expert. The official Known Issues section in the Proton documentation goes into what they consider best…

Michael Geist: The Law Bytes Podcast, Episode 95: Mark Phillips on the Federal Court of Canada’s Right to be Forgotten Ruling

Several years ago, the Privacy Commissioner of Canada filed a reference with the federal court in a case that was billed as settling the “right to be forgotten” privacy issue. That may have overstated matters, but the case did address a far more basic question on whether the privacy law applies to Google’s search engine service when it indexes webpages and presents search results in response to searches of an individual’s name. Earlier this month, the federal court released its decision, concluding that it does.

Mark Phillips is a Montreal-based lawyer practicing primarily in the areas of privacy, access to information, civil litigation, and administrative law in both Quebec and Ontario. His client – whose identity remains confidential under order of the court – filed the complaint that ultimately led to the federal court decision. He joins the Law Bytes podcast to talk about the case, where the right to be forgotten stands under Canadian law, and what might come next.

The podcast can be downloaded here, accessed on YouTube, and is embedded below. Subscribe to the podcast via Apple Podcast, Google Play, Spotify or the RSS feed. Updates on the podcast on Twitter at @Lawbytespod.

Show Notes:

Federal Court of Canada Reference decision

Credits:

Your Morning, Canadians Could Soon Have the Right to be Forgotten

The post The Law Bytes Podcast, Episode 95: Mark Phillips on the Federal Court of Canada’s Right to be Forgotten Ruling appeared first on Michael Geist.

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Perfect Life



Click here to go see the bonus panel!

Hovertext:
Let us now fight about the plural of emoji.


Today's News:

Explosm.net: Comic for 2021.07.19

New Cyanide and Happiness Comic

MattCha's Blog: Read The Room Like A Good Drug Dealer

As the COVID restrictions on gatherings lift and as I start hosting a few tea gatherings of my own, I thought I would share some sage advice on tea gatherings. The best advice I can give you on the topic of serving tea to a group or hosting group tea sessions is to always read the room like a good drug dealer…

I suppose I have lived a rather interesting life.  When I was quite young I was into a lot of meditation and hung out with and partied with a bunch of artists, hairstylists, and even some fringe academic types.  Sometimes I would even mingle with the upper elites of politics and business.  Most likely for the diversity I could bring to the conversation of the night, I think.  One such night I was invited to attend a very lavish wedding attended by the who’s who.  It went well and was a lot of fun, but as the wedding descended into pub hopping and house partying, we realized that my good friend had left her purse at the pub, and when we returned to retrieve it, it was obviously nowhere to be found.  It put such a damper on an otherwise epic night.

We retreated back to a friend’s condo to try to salvage whatever we could of the fun time we had had up until that point.  We had a few drinks but couldn’t turn the sour mood.  Then all of a sudden the mood started to lift; we were laughing and joking again, dancing again, engaging in interesting conversation, transcending conversation really, as our senses exploded into a kaleidoscope of passion.  We partied late into the night - a night we will never forget for its sheer awesomeness.

We later found out, shockingly, that we had actually all been drugged with MDMA via our drinks while we were trying to drink away our sorrows.  The one who drugged us later told us that, like any good drug dealer, you always have to read the mood of the room.

When serving guests tea this advice is just as relevant.  Read the mood and energy of the room and choose teas which bring the guests energetically to a certain place.  Sometimes the guests are invited for dinner or drinks but sometimes it just turns into tea tasting rather spontaneously. Take them on a journey with the energy of the tea.  Don’t force it.  For me, I don’t hesitate to bring out the Bulang if guests are starting to get lame.  Or a Lao Man E if the conversations are slow and long and boring.  Trust me, the mood will change quickly!  Sometimes I bring out the puerh with tranquilizing Qi if things are getting too intense at a gathering.  Basically, I serve tea for the energy I want to put out there to the guests and implore the guests to go for a ride with me.

Don’t complicate things: serve 2 teas max, maybe 3, and only if you are hosting a few hardcore tea people.  More than 2 teas and the energy gets too muddled and unpredictable with larger groups.  If you are hosting a really big group, one carefully selected tea should be chosen to dictate the mood of the gathering.  If you want them to try more tea, send them home with a sample instead.

Give them only relevant and interesting snippets of info just enough to spark curiosity about the tea and to draw them in.  DO NOT NERD OUT.  If they bite and want to know more, get into the story of the tea and what it personally means to you and what we can expect from it.  Choose teas that you have a personal connection to.

Always send the guests home with doggie bags of samples of the teas you drank.  Sometimes I even toss in full cakes.  This allows the guests to take the experience home with them.  It also allows experienced drinkers to try their hand at steeping it up with their own setup and brewing parameters, and to develop their own relationship with the tea.  And it gives them a chance to revisit the initial experience you offered them and an opportunity to go deeper with it.

If you follow this advice, every gathering will be a successful tea gathering.

Peace

LaForge's home page: Emergency Warnings in the Mobile Network + Cell Broadcast

[This post is aimed at the current German public discourse.]

In several parts of Germany there has been devastating flooding, and the public is therefore once again debating the good old question of the adequate means of alerting the population.

It is simply a gigantic tragedy how thoroughly German politics and administration have slept through all the relevant standards on this point for decades, and how they keep drawing attention to themselves with technically incorrect and completely uninformed public statements.

The topic was already publicly discussed before the current floods, last year in the context of the so-called WarnTag (national warning day). Here too, the public authorities offered nothing but false statements, e.g. that Cell Broadcast poses data protection problems. In fact, Cell Broadcast is the only technology in which there is no feedback from the individual subscriber, and the network does not even know who received the message or where that reception took place. Just like FM radio.

The fact is that all digital mobile communications standards since GSM/2G, i.e. since 1991, have included the ability to inform all users (of a given geographic region) efficiently, quickly, and with minimal data overhead via so-called broadcast messages. This technique, called Cell Broadcast (or SMSCB) in GSM/2G, differs fundamentally from all other forms of communication in the mobile network, such as calls and conventional SMS (officially SMS-PP). Calls, SMS, and mobile packet data (Internet) are always transmitted to each subscriber individually, on radio resources assigned to that subscriber. These resources are limited. In no mobile network in the world can all subscribers make calls at the same time, or receive SMS at the same time.

Cell Broadcast instead uses, as the name makes unmistakably clear, a broadcast, i.e. a one-to-many distribution mechanism. A message is sent once, thus occupying only a single shared resource on the air interface, and is then received and decoded simultaneously by all devices within range. This is like FM radio or classic terrestrial television.

Cell Broadcast was already being used by German network operators in the 1990s. And not for anything as vital as emergency signalling, but for banalities such as the list of area codes currently covered by a discounted "roaming local tariff". Yes, Vodafone once had such a thing. And O2 for a long time broadcast (for unknown reasons) the GPS coordinates of each base station via Cell Broadcast.

In the following (now almost shut down) mobile generation, 3G, Cell Broadcast was retained under the slightly different name Service Area Broadcast. After all, there are countries with, unlike Germany, functioning and competent regulation of the telecommunications market, and the long-standing legal requirements of such countries compel the network operators, and the vendors who supply them, to develop new mobile standards in a way that satisfies the legal requirements for alerting the population in an emergency.

As part of this standardization, a number of countries have standardized so-called Public Warning Systems (PWS) within 3GPP (the body responsible for 2G, 3G, 4G, 5G). These include, for example, the Japanese ETWS (Earthquake and Tsunami Warning System), the Korean KPAS (Korean Public Alerting System), the US WEA (Wireless Emergency Alerts, formerly known as CMAS), and EU-ALERT with its national implementations NL-ALERT (Netherlands), UK-ALERT (Great Britain), and RO-ALERT (Romania).

The numerous studies and investigations that informed the design of the above systems and of the international mobile standards also confirm what was obvious to any engineer beforehand: fast alerting of all subscribers (in a region) can only be achieved via a broadcast mechanism. In Japan the target was to deliver earthquake alerts to the entire affected population in less than 4 seconds. And with PWS, that is possible!

The relevant PWS standards in 2G/3G/4G/5G offer plenty of useful functions:

  • Notification within specific geographic regions

  • Interoperable interfaces, so that network elements from different manufacturers can communicate with one another

  • Configurable notification texts, not only in the primary national language but also in several other languages, which are then displayed automatically according to the phone's language setting

  • Different severity levels of alerts

  • Delivery not only via broadcast but also via unicast to every subscriber who is currently on a phone call and whose phone, for technical reasons, would not receive the broadcast during that call

  • A distinction between the repetition of a transmission with unchanged content and a transmission with changed content

So for many years there have been international standards for how all the mobile radio technologies deployed today can be used for fast, efficient, low-overhead alerting of the population.

Numerous countries have been using these systems for a long time. The US WEA has, by its own account, been used more than 61,000 times since 2012 to warn people of severe weather and other disasters.

Even within the EU, the EU-ALERT system has been specified, which is largely identical to the American WEA and builds on the same technology.

And then there are countries like Germany, which for just as many years have failed to pass laws or regulations that

  1. obligate the network operators to run these broadcast technologies in their networks

  2. obligate the network operators to provide standardized interfaces to authorities such as civil protection / disaster management, so that those authorities can independently send warnings across all operators

  3. obligate device manufacturers, e.g. via the provisions of the FTEG (the German law on radio equipment and telecommunications terminal equipment), to display PWS messages

In the USA, a system supposedly far more devoted to the free market and capitalism, all of this is within the power of the regulator, the FCC. In Germany, with its social market economy, it is apparently impossible to regulate the market accordingly. Such regulation is created in Germany only for really important matters, such as enforcing the provision of interfaces for telecommunications surveillance. For irrelevant matters like disaster management and alerting the population, there is no need to regulate the market. If the network operators do not want to offer PWS, then that is simply God-given, and nothing can be done about it.

If anyone wants to take a closer technical look at SMSCB and PWS: in 2019 we produced, within the Osmocom project, an open source implementation of the complete system, from the BTS via the BSC to the CBC, including the protocols in between, such as CBSP. This was kindly funded by the Prototype Fund with EUR 35k. Yes, that is how cheaply the necessary technology can be developed, at least for a single mobile generation...

So in a self-operated laboratory mobile network based on open source software, you can achieve more in terms of standards-compliant emergency alerting than German politics, administration, and network operators manage together.

We have people in Germany who know these standards inside out, and who have even helped write them. We have developers who have implemented these standards. But we cannot manage to actually put any of this to practical use ourselves; we would rather leave that to other countries. We would rather let the entire siren-based disaster alerting infrastructure rot, impose no requirements on the network operators, and develop strange apps that users have to install separately, that by design do not scale, and that fail when tested (WarnTag).

What a stellar achievement for Germany, the highly developed technology hub.

The Shape of Code: Estimating using a granular sequence of values

When asked for an estimate of the time needed to complete a task, should developers be free to choose any numeric value, or should they be restricted to selecting from a predefined set of values (e.g., the Fibonacci numbers, or T-shirt sizes)?

Allowing any value to be chosen would appear to provide the greatest flexibility to make an accurate estimate. However, estimating is an intrinsically uncertain process (i.e., the future is unknown), and it is done by people with varying degrees of experience (which might be used to help guide their prediction about the future).

Restricting the selection process to one of the values in a granular sequence of numbers has several benefits, including:

  • being able to adjust the gaps between permitted values to match the likely level of uncertainty in the task effort, or the best accuracy resolution believed possible,
  • reducing the psychological stress of making an estimate, by explicitly giving permission to ignore the smaller issues (because they are believed to require a total effort that is less than the sequence granularity),
  • helping to maintain developer self-esteem, by providing a justification when an estimate turns out to be inaccurate, e.g., the granularity prevented a more accurate estimate being made.

Is there an optimal sequence of granular values to use when making task estimates for a project?

The answer to this question depends on what is being optimized.

Given how hard it is to get people to produce estimates, the first criterion for an optimal sequence has to be that people are willing to use it.

I have always been struck by the ritualistic way in which the Fibonacci sequence is described by those who use it to make estimates. Rituals are an effective technique used by groups to help maintain members’ adherence to group norms (one of which might be producing estimates).

A possible reason for the tendency to use round numbers as estimate values is that this usage is common in other social interactions involving numeric values, e.g., when replying to a request for the time of day.

The use of round numbers, when developers have the option of selecting from a continuous range of values, is a developer imposed granular sequence. What form do these round number sequences take?

The plot below shows the values of each of the six most common round number estimates present in the BrightSquid, SiP, and CESAW (project 615) effort estimation data sets, plus the first six Fibonacci numbers (code+data):

The six most common round number estimates present in various software task estimation datasets, plus the Fibonacci sequence, and fitted regression lines.

The lines are fitted regression models having the form \(\text{permittedValue} \approx e^{0.5\,\text{Order}}\) (there is a small variation in the value of the constant; the smallest value for project 615 was probably calculated rather than being human selected).
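As a quick illustration of the fitted model (a hypothetical sketch; the names are mine, not from the post's code+data), rounding \(e^{0.5 k}\) for successive orders \(k\) gives a sequence whose growth rate, \(e^{0.5} \approx 1.649\), is close to the Fibonacci sequence's golden ratio, \(\approx 1.618\):

    import math

    # Granular values implied by the fitted model: permittedValue ~ e^(0.5 * order).
    fitted = [round(math.exp(0.5 * k)) for k in range(1, 7)]
    fib = [1, 2, 3, 5, 8, 13]  # first six Fibonacci numbers as used in estimation (an assumption)

    print(fitted)         # [2, 3, 4, 7, 12, 20]
    print(fib)            # [1, 2, 3, 5, 8, 13]
    print(math.exp(0.5))  # 1.6487..., vs. the golden ratio 1.6180...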

This plot shows a consistent pattern of use across multiple projects (I know of several projects that use Fibonacci numbers, but don’t have any publicly available data). Nothing is said about this pattern being (near) optimal in any sense.

The time unit of estimation for this data was minutes or hours. Would the equation have the same form if the time unit were days, and would the constant still be around 0.5? I await the data needed to answer this question.

This brief analysis looked at granular sequences from the perspective of the distribution of estimates made. Perhaps it makes more sense to base a granular estimation sequence on the distribution of actual task effort. A topic for another post.

Jesse Moynihan: Forming 326

MattCha's Blog: 2021 Essence of Tea Morsels (or Pieces?)

 

This 2021 Essence of Tea Morsels goes for $168.00 for a 200g cake, or $0.84/g, but it was my free gift included with a sample purchase.  It’s by far the most unusual of Essence of Tea’s spring releases, maybe one of their most unique offerings ever?  Composed of small morsels (in North America they usually say “pieces”) of this and morsels of that, which must be how it got its name. “It's all from ancient trees, from Yiwu Guoyoulin gardens, Guafengzhai, Bulang and some other single trees.”

Dry leaves smell of very sweet fruity berries and creamy sweetness.

First infusion has a very fruity, hay, woody, mushroom onset with an emerging bitter blandness that converts to a very sweet berry, strawberry and peach fruity finish in the long aftertaste.  There is lots of depth of taste and a leaning toward fruity sweetness.  Sticky sandy full mouthcoating with some pungent cool in the deeper throat.  There is lots going on but it tastes nice together.  You can feel a bit of bitter astringency in the empty stomach.  The Qi is in the chest, with an alerted mind.  The strong energetic Qi feels a lot like a Bulang energy here.

The second has a medicinal very pungent camphor cooling sweet bread onset where fruity tastes like plum and peach emerge, as well as a woody-dirt bitter flat taste that reminds me of Bulang.  There is a sticky, full, loose chalky feeling in the mouth with a deep cooling throat and some saliva production.  There are some fruity tastes like plum and bread that come out later in the aftertaste.  Strong heart-pounding Qi - very energetic.

The third infusion has a creamier sweet very pungent medicinal woody camphor onset; it’s very strong and concentrated.  It ends both with a bitter-wood-dirt flatness as well as a creamy sweetness, some plum, and bread, almost apricot.  The full onset really comes on thick and coats the thick sticky-cottony mouth with lots of full and lasting taste throughout the profile.  Strong Qi in the chest and very stimulating on the mind.  It’s a powerful blend.  There is also some bodyfeeling in the arms, like a tingling floating thing going on.  Strong but only a touch harsh on my empty stomach.

The 4th is left to cool and has a thick woody syrupy concentrated camphor taste with a lot of coolness, then turns to a flat-woody-dirt bitter that is totally Bulang.  There are some florals and plum and lots of cool pungent camphor that emerge out of the strong camphor bitter woody taste.  There is a certain sweetness, bread-like, to the taste throughout.  The mouthfeeling is sticky and full with some saliva forming.  Strong and powerful Qi with Heartbeats and limb numbness.



5th infusion has a melon woody coco bitterness with a long fruity melon and peach finish within the mild dirt-woody bitterness.  The taste is pretty condensed with a saliva producing effect and a fruity coco finish in the mouth.  Nice strong energy in the mind.

6th has a fruitier peach, pear, melon that comes just before and over a long mild woody-dirt-coco bitterness.  The mouthfeeling is sticky, almost drying, full coating, with an open cool throat and some gripping of the throat with a mild saliva producing effect.  Kind of a metallic woody taste on the tongue.

7th infusion has a fruity, medicinal, bitter-dirt-woody taste to it that has a flat bitter medicinal finish and some subtle fruity tastes.  The main taste here is a medicinal woody kind of taste.  The Qi is strong and sedating at the same time now.  Some lesser chest sensations.  Tighter almost dry mouthcoating with slight saliva producing effect.

8th is cooled down and is a very peachy apricot syrupy taste with a woody medicinal menthol camphor taste.  The sweet taste is thick and goes the distance in the taste profile with a melon coolness on the breath.  The mouthfeeling is sticky and cottony and full.  Nice strong Qi feeling.

9th has a melon then woody onset taste with a dry woody coolness on the breath.  There is a creamy sweetness almost mushroom taste that returns and the bitterness is mild kind of coco and medicinal taste.  These last few infusions taste like Yiwu material.  There are nuances of potato and bread.  Nice full sticky feeling with a nice oily texture here.  Nice sedating effect now.

10th is cooled down… there is an almond, syrupy sweetness that comes along with mild dirt-coco bitterness and a nice long cool coco sweet finish.  Nice relaxing sedating feeling with some Heart racing and alertness.  Limbs feel a bit heavy.  Melon and potato on the breath reminds me of Gua Feng Zhai in the blend.

11th has a melon chocolate onset along with a creamy yogurt melon taste.  There is a bland potato taste, woody taste, melon, creamy sweet, dirt-coco, subtle medicinal.  Lots of soft faint flavours in there now as the aftertaste fades to a blander woody dirt-coco.  The mouthfeeling remains dry.

12th is left to cool and sit a while and it is a syrupy bitter woody coco dirt camphor taste.  It feels really full but with a softer mineral melon aftertaste.  The recent material comes out in the aftertaste now and the initial taste is more semi-aged or at least a few years aged. 

13th has a dirt coco woody onset with a light fresh mineral and melon finish.  Almost potato in the finish.  The aftertaste reminds me of Gua Feng Zhai and the onset reminds me of Bulang/Menghai.  There is a long fresh aftertaste.  These infusions taste like a semi-aged Bulang initial taste with a young Gua Feng Zhai or Yiwu finish.  It leaves me wondering if this is the composition of the blend?

14th has a fresh fruity onset with a bitter-dirt-coco that follows.  The taste is pretty stable the last handful of infusions despite this being a blend.  Soft dry sandy mouthfeeling now.  With a calming Qi sensation without any bodyfeeling now.

I end up mug steeping out the rest and it’s pretty similar to the last few infusions.  More bitter and more gripping dry mouthfeeling than previously but not too too strong.  I can feel it in my guts a tiny bit and the Qi is strongly energizing with strong Heart beats.  Vigorous energy when steeping strongly for sure.

The overnight steeping is mainly a nice melon woody sweet taste.  Still surprisingly lots of taste in there.  This one has some good stamina, I think.

Overall, this puerh is unusual; it has some really good tastes in there and even better energy.  I think there is a bit of powerful semi-aged Bulang that is showcased as the infusions get more bitter, and then as the session progresses it has the bitter strength up front with the younger, fresher Yiwu/Gua Feng Zhai sedating and very sweet finish.  Often the polarities present simultaneously for an unusual but full effect.  The young and slightly older material along with the Menghai and Yiwu blend is interesting but really disjointed.  I think this has great material in it and would probably taste better with a bit of age on it, given time for the disparate pieces to come together more harmoniously.  It has interesting movement throughout the session and a unique combination of energizing then sedating Qi.  I find puerh like this hard to purchase before I can clearly see how it’s going to come together.  One thing is for certain: this puerh has good bones.  Would love to try this one again in 10 years!

Alex’s (Tea Notes) Tasting Notes

Peace

Schneier on Security: REvil is Off-Line

This is an interesting development:

Just days after President Biden demanded that President Vladimir V. Putin of Russia shut down ransomware groups attacking American targets, the most aggressive of the groups suddenly went off-line early Tuesday.

[…]

Gone was the publicly available “happy blog” the group maintained, listing some of its victims and the group’s earnings from its digital extortion schemes. Internet security groups said the custom-made sites — think of them as virtual conference rooms — where victims negotiated with REvil over how much ransom they would pay to get their data unlocked also disappeared. So did the infrastructure for making payments.

Okay. So either the US took them down, Russia took them down, or they took themselves down.

new shelton wet/dry: Every day, the same, again

Inside the PAC operation that raised millions by impersonating Donald Trump — billions of robocalls […] almost all of which feature recorded soundbites of public statements from Trump

Outdoor Wedding: 6 Fully Vaccinated Infected With Covid-19 Delta Variant and 8 fully vaccinated healthcare workers caught COVID-19 at a Vegas pool party

Facebook is ditching plans to make an interface that reads the brain — Some scientists said it was never possible anyway.

Facebook fired 52 people from 2014 to August 2015 over abusing access to user data, a new book says. One person used data to find a woman he was traveling with who had left him after a fight, the book says.

Gabriela Buendia tries to take every precaution when it comes to information about her patients. The therapist uses encrypted video apps for virtual sessions, stores charts in HIPAA-compliant applications and doesn’t reach out to her clients on social media. She said she never saves her patients’ phone numbers on her smartphone either. So it came as a shock when Buendia found out recently that Venmo, a digital payment app that patients increasingly use to pay their therapists, was displaying her entire contact list publicly.

Dogecoin creator likens cryptocurrencies to a scam run by “powerful cartel” to benefit the rich

“Acrobat” - the initial M, which opens the word “martyr”, in a liturgical manuscript (11th century) from the Limoges monastery of St. Marcial.

CreativeApplications.Net: KHM Netze Open and KH門 Festival

The Academy of Media Arts Cologne (KHM) hosts an annual diploma exhibition KHM Open from 21st to 25th July 2021 in various locations in Cologne as well as online. In addition to diploma works, there are interventions in hybrid forms, led by students and seminars at KHM. “Netze Open” is an online exhibition platform by…

things magazine: The eternal circle of nostalgia

Dispersing the Capsules: ‘Although the Nakagin Capsule Tower, an icon of Metabolist architecture in Tokyo designed by Kisho Kurokawa, could not be saved, plans are afoot to remove the capsules, refurbish them, and donate them to museums in and beyond …

bit-player: Three Months in Monte Carlo

As a kid I loved magnets. I wanted to know where the push and pull came from. Years later, when I heard about the Ising model of ferromagnetism, I became an instant fan. Here was a simple set of rules, like a game played on graph paper, that offered a glimpse of what goes on inside a magnetic material. Lots of tiny magnetic fields spontaneously line up to make one big field, like a school of fish all swimming in the same direction. I was even more enthusiastic when I learned about the Monte Carlo method, a jauntily named collection of mathematical and computational tricks that can be used to simulate an Ising system on a computer. With a dozen lines of code I could put the model in motion and explore its behavior for myself.

Over the years I’ve had several opportunities to play with Ising models and Monte Carlo methods, and I thought I had a pretty good grasp of the basic principles. But, you know, the more you know, the more you know you don’t know.

In 2019 I wrote a brief article on Glauber dynamics, a technique for analyzing the Ising model introduced by Roy J. Glauber, a Harvard physicist. In my article I presented an Ising simulation written in JavaScript, and I explained the algorithm behind it. Then, this past March, I learned that I had made a serious blunder. The program I’d offered as an illustration of Glauber dynamics actually implemented a different procedure, known as the Metropolis algorithm. Oops. (The mistake was brought to my attention by a comment signed “L. Y.,” with no other identifying information. Whoever you are, L. Y., thank you!)

A few days after L. Y.’s comment appeared, I tracked down the source of my error: I had reused some old code and neglected to modify it for its new setting. I corrected the program—only one line needed changing—and I was about to publish an update when I paused for thought. Maybe I could dismiss my goof as mere carelessness, but I realized there were other aspects of the Ising model and the Monte Carlo method where my understanding was vague or superficial. For example, I was not entirely sure where to draw the line between the Glauber and Metropolis procedures. (I’m even less sure now.) I didn’t know which features of the two algorithms are most essential to their nature, or how those features affect the outcome of a simulation. I had homework to do.

Since then, Monte Carlo Ising models have consumed most of my waking hours (and some of the sleeping ones). Sifting through the literature, I’ve found sources I never looked at before, and I’ve reread some familiar works with new understanding and appreciation. I’ve written a bunch of computer programs to clarify just which details matter most. I’ve dug into the early history of the field, trying to figure out what the inventors of these techniques had in mind when they made their design choices. Three months later, there are still soft spots in my knowledge, but it’s time to tell the story as best I can.

This is a long article—nearly 6,000 words. If you can’t read it all, I recommend playing with the simulation programs. There are five of them: 1, 2, 3, 4, 5. On the other hand, if you just can’t get enough of this stuff, you might want to have a look at the source code for those programs on GitHub. The repo also includes data and scripts for the graphs in this article.


Let’s jump right in with an Ising simulation. Below this paragraph is a grid of randomly colored squares, and beneath that a control panel. Feel free to play. Press the Run button, adjust the temperature slider, and click the radio buttons to switch back and forth between the Metropolis and the Glauber algorithms. The Step button slows down the action, showing one frame at a time. Above the grid are numerical readouts labeled Magnetization and Local Correlation; I’ll explain below what those instruments are monitoring.

The model consists of 10,000 sites, arranged in a \(100 \times 100\) square lattice, and colored either dark or light, indigo or mauve. In the initial condition (or after pressing the Reset button) the cells are assigned colors at random. Once the model is running, more organized patterns emerge. Adjacent cells “want” to have the same color, but thermal agitation disrupts their efforts to reach accord.

The lattice is constructed with “wraparound” boundaries: all the cells along the right edge are adjacent to those on the left side, and the top and bottom are joined in the same way. (This arrangement is also known as periodic boundary conditions. Imagine infinitely many copies of the lattice laid down like square tiles on an infinite plane.) Topologically, the structure is a torus, the surface of a doughnut. Although the area of the surface is finite, you can set off in a straight line in any direction and keep going forever, without falling off the edge of the world. Because of the wraparound boundaries, all the cells have exactly four nearest neighbors; those in the corners and along the edges are just like those in the interior and require no special treatment.
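In code, the wraparound amounts to modular arithmetic on the row and column indices. A minimal sketch (my own Python, not the article's JavaScript):

    def neighbors(i, j, n=100):
        # The modulo arithmetic implements the wraparound boundaries:
        # row -1 wraps to row n-1, row n wraps to row 0, likewise for columns.
        return [((i - 1) % n, j),   # north
                ((i + 1) % n, j),   # south
                (i, (j - 1) % n),   # west
                (i, (j + 1) % n)]   # east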

When the model is running, changing the temperature can have a dramatic effect. At the upper end of the scale, the grid seethes with activity, like a boiling cauldron, and no feature survives for more than a few milliseconds. In the middle of the temperature range, large clusters of like-colored cells begin to appear, and their lifetimes are somewhat longer. When the system is cooled still further, the clusters evolve into blobby islands and isthmuses, coves and straits, all of them bounded by strangely writhing coastlines. Often, the land masses eventually erode away, or else the seas evaporate, leaving a featureless monochromatic expanse. In other cases broad stripes span the width or height of the array.

Whereas nudging the temperature control utterly transforms the appearance of the grid, the effect of switching between the two algorithms is subtler.

  • At high temperature (5.0, say), both programs exhibit frenetic activity, but the turmoil in Metropolis mode is more intense.
  • At temperatures near 3.0, I perceive something curious in the Metropolis program: Blobs of color seem to migrate across the grid. If I stare at the screen for a while, I see dense flocks of crows rippling upward or leftward; sometimes there are groups going both ways at once, with wings flapping. In the Glauber algorithm, blobs of color wiggle and jiggle like agitated amoebas, but they don’t go anywhere.
  • At still lower temperatures (below about 1.5), the Ising world calms down. Both programs converge to the same monochrome or striped patterns, but Metropolis gets there faster.

I have been noticing these visual curiosities—the fluttering wings, the pulsating amoebas—for some time, but I have never seen them mentioned in the literature. Perhaps that’s because graphic approaches to the Ising model are of more interest to amateurs like me than to serious students of the underlying physics and mathematics. Nevertheless, I would like to understand where the patterns come from. (Some partial answers will emerge toward the end of this article.)

For those who want numbers rather than pictures, I offer the magnetization and local-correlation meters at the top of the program display. Magnetization is a global measure of the extent to which one color or the other dominates the lattice. Specifically, it is the number of dark cells minus the number of light cells, divided by the total number of cells:

\[M = \frac{\blacksquare - \square }{\blacksquare + \square}.\]

\(M\) ranges from \(-1\) (all light cells) through \(0\) (equal numbers of light and dark cells) to \(+1\) (all dark).

Local correlation examines all pairs of nearest-neighbor cells and tallies the number of like pairs minus the number of unlike pairs, divided by the total number of pairs:

\[R = \frac{(\square\square + \blacksquare\blacksquare) - (\square\blacksquare + \blacksquare\square) }{\square\square + \square\blacksquare + \blacksquare\square + \blacksquare\blacksquare}.\]

Again the range is from \(-1\) to \(+1\). These two quantities are both measures of order in the Ising system, but they focus on different spatial scales, global vs. local. All three of the patterns in Figure 1 have magnetization \(M = 0\), but they have very different values of local correlation \(R\).

Figure 1: Three patterns with magnetization zero.
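Both meters are simple tallies over the lattice. Here is a minimal NumPy sketch of the two measures (my own code, assuming spins stored as \(\pm 1\) with dark \(= +1\)):

    import numpy as np

    def magnetization(spins):
        # (dark - light) / total for a +/-1 spin array is just the mean.
        return spins.mean()

    def local_correlation(spins):
        # Count each nearest-neighbor bond once, with wraparound;
        # the product is +1 for a like pair, -1 for an unlike pair.
        right = spins * np.roll(spins, -1, axis=1)
        down = spins * np.roll(spins, -1, axis=0)
        return (right.mean() + down.mean()) / 2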


The Ising model was invented 100 years ago by Wilhelm Lenz of the University of Hamburg, who suggested it as a thesis project for his student Ernst Ising. It was introduced as a model of a permanent magnet.

A real ferromagnet is a quantum-mechanical device. Inside, electrons in neighboring atoms come so close together that their wave functions overlap. Under these circumstances the electrons can reduce their energy slightly by aligning their spin vectors. According to the rules of quantum mechanics, an electron’s spin must point in one of two directions; by convention, the directions are labeled up and down. The ferromagnetic interaction favors pairings with both spins up or both down. Each spin generates a small magnetic dipole moment. Zillions of them acting together hold your grocery list to the refrigerator door.

In the Ising version of this structure, the basic elements are still called spins, but there is nothing twirly about them, and nothing quantum mechanical either. They are just abstract variables constrained to take on exactly two values. It really doesn’t matter whether we name the values up and down, mauve and indigo, or plus and minus. (Within the computer programs, the two values are \(+1\) and \(-1\), which means that flipping a spin is just a matter of multiplying by \(-1\).) In this article I’m going to refer to up/down spins and dark/light cells interchangeably, adopting whichever term is more convenient at the moment.

As in a ferromagnet, nearby Ising spins want to line up in parallel; they reduce their energy when they do so. This urge to match spin directions (or cell colors) extends only to nearest neighbors; more distant sites in the lattice have no influence on one another. In the two-dimensional square lattice—the setting for all my simulations—each spin’s four nearest neighbors are the lattice sites to the north, east, south, and west (including “wraparound” neighbors for cells on the boundary lines).

If neighboring spins want to point the same way, why don’t they just go ahead and do so? The whole system could immediately collapse into the lowest-energy configuration, with all spins up or all down. That does happen, but there are complicating factors and countervailing forces. Neighborhood conflicts are the principal complication: Flipping your spin to please one neighbor may alienate another. The countervailing influence is heat. Thermal fluctuations can flip a spin even when the change is energetically unfavorable.

The behavior of the Ising model is easiest to understand at the two extremities of the temperature scale. As the temperature \(T\) climbs toward infinity, thermal agitation completely overwhelms the cooperative tendencies of adjacent spins, and all possible states of the system are on an equal footing. The lattice becomes a random array of up and down spins, each of which is rapidly changing its orientation. At the opposite end of the scale, where \(T\) approaches zero, the system freezes. As thermal fluctuations subside, the spins sooner or later sink into the orderly, low-energy, fully magnetized state—although “sooner or later” can stretch out toward the age of the universe.

Things get more complicated between these extremes. Experiments with real magnets show that the transition from a hot random state to a cold magnetized state is not gradual. As the material is cooled, spontaneous magnetization appears suddenly at a critical temperature called the Curie point (about 840 K in iron). Lenz and Ising wondered whether this abrupt onset of magnetization could be seen in their simple, abstract model. Ising was able to analyze only a one-dimensional version of the system—a line or ring of spins—and he was disappointed to see no sharp phase transition. He thought this result would hold in higher dimensions as well, but on that point he was later proved wrong.

The idealized phase diagram in Figure 2 (borrowed with amendments from my 2019 article) outlines the course of events for a two-dimensional model. To the right, above the critical temperature \(T_c\), there is just one phase, in which up and down spins are equally abundant on average, although they may form transient clusters of various sizes. Below the critical point the diagram has two branches, leading to all-up and all-down states at zero temperature. As the system is cooled through \(T_c\) it must follow one branch or the other, but which one is chosen is a matter of chance.

Figure 2: Illustrative graph of magnetization vs. temperature, showing bifurcation where the temperature drops below the critical point. In this figure I have corrected another error in my 2019 article. The original graph showed magnetization increasing along what looks like a quadratic curve, with \(M\) proportional to the square root of \(T_C - T\). In fact magnetization is proportional to the eighth root, which makes the onset more abrupt.

The immediate vicinity of \(T_c\) is the most interesting region of the phase diagram. If you scroll back up to Program 1 and set the temperature near 2.27, you’ll see filigreed patterns of all possible sizes, from single pixels up to the diameter of the lattice. The time scale of fluctuations also spans orders of magnitude, with some structures winking in and out of existence in milliseconds and others lasting long enough to test your patience.

All of this complexity comes from a remarkably simple mechanism. The model makes no attempt to capture all the details of ferromagnet physics. But with minimal resources—binary variables on a plain grid with short-range interactions—we see the spontaneous emergence of cooperative, collective phenomena, as self-organizing patterns spread across the lattice. The model is not just a toy. Ten years ago Barry M. McCoy and Jean-Marie Maillard wrote:

It may be rightly said that the two dimensional Ising model… is one of the most important systems studied in theoretical physics. It is the first statistical mechanical system which can be exactly solved which exhibits a phase transition.


As I see it, the main question raised by the Ising model is this: At a specified temperature \(T\), what does the lattice of spins look like? Of course “look like” is a vague notion; even if you know the answer, you’ll have a hard time communicating it except by showing pictures. But the question can be reformulated in more concrete ways. We might ask: Which configurations of the spins are most likely to be seen at temperature \(T\)? Or, conversely: Given a spin configuration \(S\), what is the probability that \(S\) will turn up when the lattice is at temperature \(T\)?

Intuition offers some guidance on these points. Low-energy configurations should always be more likely than high-energy ones, at any finite temperature. Differences in energy should have a stronger influence at low temperature; as the system gets warmer, thermal fluctuations can mask the tendency of spins to align. These rules of thumb are embodied in a little fragment of mathematics at the very heart of the Ising model:

\[W_B = \exp\left(\frac{-E}{k_B T}\right).\]

Here \(E\) is the energy of a given spin configuration, found by scanning through the entire lattice and tallying the number of nearest-neighbor pairs that have parallel vs. antiparallel spins. In the denominator, \(T\) is the absolute temperature and \(k_B\) is Boltzmann’s constant, named for Ludwig Boltzmann, the Austrian maestro of statistical mechanics. The entire expression is known as the Boltzmann weight, and it determines the probability of observing any given configuration.

In standard physical units the constant \(k_B\) is about \(10^{-23}\) joules per kelvin, but the Ising model doesn’t really live in the world of joules and kelvins. It’s a mathematical abstraction, and we can measure its energy and temperature in any units we choose. The convention among theorists is to set \(k_B = 1\), and thereby eliminate it from the formula altogether. Then we can treat both energy and temperature as if they were pure numbers, without units.
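In those units the energy tally and the Boltzmann weight take only a few lines of code. A sketch (my own; it assumes the common convention that each parallel nearest-neighbor pair contributes \(-1\) to \(E\) and each antiparallel pair \(+1\), which matches the article's description of tallying pairs):

    import numpy as np

    def energy(spins):
        # Sum s_i * s_j over nearest-neighbor bonds (each counted once,
        # with wraparound); negate so parallel pairs lower the energy.
        right = spins * np.roll(spins, -1, axis=1)
        down = spins * np.roll(spins, -1, axis=0)
        return -(right.sum() + down.sum())

    def boltzmann_weight(spins, T):
        # Unnormalized weight W_B = exp(-E / T), with k_B set to 1.
        return np.exp(-energy(spins) / T)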

Figure 3: Graph of Boltzmann weight as a function of configuration energy for T = 1, 2.7, and 5.

Figure 3 confirms that the equation for the Boltzmann weight yields curves with an appropriate general shape. Lower energies correspond to higher weights, and lower temperatures yield steeper slopes. These features make the curves plausible candidates for describing a physical system such as a ferromagnet. Proving that they are not only good candidates but the unique, true description of a ferromagnet is a mathematical and philosophical challenge that I decline to take on. Fortunately, I don’t have to. The model, unlike the magnet, is a human invention, and we can make it obey whatever laws we choose. In this case let’s simply decree that the Boltzmann distribution gives the correct relation between energy, temperature, and probability.

Note that the Boltzmann weight is said to determine a probability, not that it is a probability. It can’t be. \(W_B\) can range from zero to infinity, but a probability must lie between zero and one. To get the probability of a given configuration, we need to calculate its Boltzmann weight and then divide by \(Z\), the sum of the weights of all possible configurations—a process called normalization. For a model with \(10{,}000\) spins there are \(2^{10{,}000}\) configurations, so normalization is not a task to be attempted by direct, brute-force arithmetic.

It’s a tribute to the ingenuity of mathematicians that the impossible-sounding problem of calculating \(Z\) has in fact been conquered. In 1944 Lars Onsager published a complete solution of the two-dimensional Ising model—complete in the sense that it allows you to calculate the magnetization, the energy per spin, and a variety of other properties, all as a function of temperature. I would like to say more about Onsager’s solution, but I can’t. I’ve tried more than once to work my way through his paper, but it defeats me every time. I would understand nothing at all about this result if it weren’t for a little help from my friends. Barry Cipra, in a 1987 article, and Cristopher Moore and Stephan Mertens, in their magisterial tome The Nature of Computation, rederive the solution by other means. They relate the Ising model to more tractable problems in graph theory, where I am able to follow most of the steps in the argument. Even in these lucid expositions, however, I find the ultimate result unilluminating. I’ll cite just one fact emerging from Onsager’s difficult algebraic exercise. The exact location of the critical temperature, separating the magnetic from the nonmagnetic phases, is:

\[\frac{2}{\log{(1 + \sqrt{2})}} \approx 2.269185314.\]


For those intimidated by the icy crags of Mt. Onsager, I can recommend the warm blue waters of Monte Carlo. The math is easier. There’s a clear, mechanistic connection between microscopic events and macroscopic properties. And there are the visualizations—that lively dance of the mauve and the indigo—which offer revealing glimpses of what’s going on behind the mathematical curtains. All that’s missing is exactness. Monte Carlo studies can pin down \(T_C\) to several decimal places, but they will never give the algebraic expression found by Onsager.

The Monte Carlo method was devised in the years immediately after World War II by mathematicians and physicists working at the Los Alamos Laboratory in New Mexico. (This origin story is not without controversy. Statisticians point out that William Gossett (“Student”) and Lord Kelvin both calculated probabilities with random numbers circa 1900. And there’s the even earlier (though likely apocryphal) story about Le Comte de Buffon’s experiments with a randomly tossed needle or stick for estimating the value of \(\pi\). Stanislaw Ulam replied: “It seems to me that while it is true that cavemen have already used divination and the Roman priests have tried to prophesy the future from the interiors of birds, there wasn’t anything in literature about solving differential and integral equations by means of suitable stochastic processes.”) The key innovation was the use of randomness as a tool for estimation or approximation. This idea came from the mathematician Stanislaw Ulam. While recuperating from an illness, he passed the time playing a card game called Canfield solitaire. Curious about what proportion of the games were winnable, he realized he could estimate this number just by playing a great many games and recording the outcomes. That was the first application of the new method.

The second application was the design of nuclear weapons. The problem at hand was to understand the diffusion of neutrons through uranium and other materials. When a wandering neutron collided with an atomic nucleus, the neutron might be scattered in a new direction, or it might be absorbed by the nucleus and effectively disappear, or it might induce fission in the nucleus and thereby give rise to several more neutrons. Experiments had provided reasonable estimates of the probability of each of these events, but it was still hard to answer the crucial question: In a lump of fissionable matter with a particular shape, size, and composition, would the nuclear chain reaction fizzle or boom? The Monte Carlo method offered an answer by simulating the paths of thousands of neutrons, using random numbers to generate events with the appropriate probabilities. The first such calculations were done with the ENIAC, the vacuum-tube computer built at the University of Pennsylvania. Later the work shifted to the MANIAC, built at Los Alamos.

This early version of the Monte Carlo method is now sometimes called simple or naive Monte Carlo; I have also seen the term hit-or-miss Monte Carlo. The scheme served well enough for card games and for weapons of mass destruction, but the Los Alamos group never attempted to apply it to a problem anything like the Ising model. It would not have worked if they had tried. I know that because textbooks say so, but I had never seen any discussion of exactly how the model would fail. So I decided to try it for myself.

My plan was indeed simple and naive and hit-or-miss. First I generated a random sample of \(10{,}000\) spin configurations, drawn independently with uniform probability from the set of all possible states of the lattice. This was easy to do: I constructed the samples by the computational equivalent of tossing a fair coin to assign a value to each spin. Then I calculated the energy of each configuration and, assuming some definite temperature \(T\), assigned a Boltzmann weight. I still couldn’t convert the Boltzmann weights into true probabilities without knowing the sum of all \(2^{10{,}000}\) weights, but I could sum up the weights of the \(10{,}000\) configurations in the sample. Dividing each weight by this sum yields a relative probability: It estimates how frequently (at temperature \(T\)) we can expect to see a member of the sample relative to all the other members.

At extremely high temperatures—say \(T \gt 1{,}000\)—this procedure works pretty well. That’s because all configurations are nearly equivalent at those temperatures; they all have about the same relative probability. On cooling the system, I hoped to see a gradual skewing of the relative probabilities, as configurations with lower energy are given greater weight. What happens, however, is not a gradual skewing but a spectacular collapse. At \(T = 2\) the lowest-energy state in my sample had a relative probability of \(0.9999999979388337\), leaving just \(0.00000000206117\) to be shared among the other \(9{,}999\) members of the set.
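The collapse is easy to reproduce in miniature. Here is a sketch of the naive experiment (my code, with a smaller sample than the article's \(10{,}000\); the log-weights are shifted by their maximum before exponentiating, to avoid floating-point overflow):

    import numpy as np

    rng = np.random.default_rng(1)
    n, samples, T = 100, 1000, 2.0

    def energy(spins):
        right = spins * np.roll(spins, -1, axis=1)
        down = spins * np.roll(spins, -1, axis=0)
        return -(right.sum() + down.sum())

    # Independent, uniformly random configurations: the "fair coin" sample.
    energies = np.array([energy(rng.choice([-1, 1], size=(n, n)))
                         for _ in range(samples)])

    logw = -energies / T                # log Boltzmann weights
    w = np.exp(logw - logw.max())       # shift before exponentiating
    rel_prob = w / w.sum()              # relative probabilities within the sample
    print(rel_prob.max())               # at T = 2, one state hogs nearly all the mass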

Figure 4: Histograms showing energy peaks for Boltzmann distributions at T = 2, 5, 10, and 50, and for randomly generated lattice configurations.

The fundamental problem is that a small sample of randomly generated lattice configurations will almost never include any states that are commonly seen at low temperature. The histograms of Figure 4 show Boltzmann distributions at various temperatures (blue) compared with the distribution of randomly generated states (red). The random distribution is a slender peak centered at zero energy. There is slight overlap with the Boltzmann distribution at \(T = 50\), but none whatever for lower temperatures.

There’s actually some good news in this fiasco. The failure of random sampling indicates that the interesting states of the Ising system—those which give the model its distinctive behavior—form a tiny subset buried within the enormous space of \(2^{10{,}000}\) configurations. If we can find a way to focus on that subset and ignore the rest, the job will be much easier.


The means to focus more narrowly came with a second wave of Monte Carlo methods, also emanating from Los Alamos. The foundational document was a paper titled “Equation of State Calculations by Fast Computing Machines,” published in 1953. Among the five authors, Nicholas Metropolis was listed first (presumably for alphabetical reasons), and his name remains firmly attached to the algorithm presented in the paper.

With admirable clarity, Metropolis et al. explain the distinction between the old and the new Monte Carlo: “[I]nstead of choosing configurations randomly, then weighting them with \(\exp(-E/kT)\), we choose configurations with a probability \(\exp(-E/kT)\) and weight them evenly.” Starting from an arbitrary initial state, the scheme makes small, random modifications, with a bias favoring configurations with a lower energy (and thus higher Boltzmann weight), but not altogether excluding moves to higher-energy states. After many moves of this kind, the system is almost certain to be meandering through a neighborhood that includes the most probable configurations. Methods based on this principle have come to be known as MCMC, for Markov chain Monte Carlo. The Metropolis algorithm and Glauber dynamics are the best-known exemplars.

Roy Glauber also had Los Alamos connections. He worked there during World War II, in the same theory division that was home to Ulam, John von Neumann, Hans Bethe, Richard Feynman, and many other notables of physics and mathematics. But Glauber was a very junior member of the group; he was 18 when he arrived, and a sophomore at Harvard. His one paper on the Ising model was published two decades later, in 1963, and makes no mention of his former Los Alamos colleagues. It also makes no mention of Monte Carlo methods; nevertheless, Glauber dynamics has been taken up enthusiastically by the Monte Carlo community.

When applied to the Ising model, both the Metropolis algorithm and Glauber dynamics work by focusing attention on a single spin at each step, and either flipping the selected spin or leaving it unchanged. Thus the system passes through a sequence of states that differ by at most one spin flip. Statistically speaking, this procedure sounds a little dodgy. Unlike the naive Monte Carlo approach, where successive states are completely independent, MCMC generates configurations that are closely correlated. It’s a biased sample. To overcome the bias, the MCMC process has to run long enough for the correlations to fade away. With a lattice of \(N\) sites, a common protocol retains only every \(N\)th sample, discarding all those in between.

The mathematical justification for the use of correlated samples is the theory of Markov chains, devised by the Russian mathematician A. A. Markov circa 1900. It is a tool for calculating probabilities when each event depends on the previous event. And, in the Monte Carlo method, it allows one to work with those probabilities without getting bogged down in the morass of normalization.


The Metropolis and the Glauber algorithms are built on the same armature. They both rely on two main components: a visitation sequence and an acceptance function. The visitation sequence determines which lattice site to visit next; in effect, it shines a spotlight on one selected spin, proposing to flip it to the opposite orientation. The acceptance function determines whether to accept this proposal (and flip the spin) or reject it (and leave the existing spin direction unchanged). Each iteration of this two-phase process constitutes one “microstep” of the Monte Carlo procedure. Repeating the procedure \(N\) times constitutes a “macrostep.” Thus one macrostep amounts to one microstep per spin.

In the Metropolis algorithm, the visitation order is deterministic. The program sweeps through the lattice methodically, repeating the same sequence of visits in every macrostep. The original 1953 presentation of the algorithm did not prescribe any specific sequence, but the procedure was clearly designed to visit each site exactly once during a sweep. The version of the Metropolis algorithm in Program 1 adopts the most obvious deterministic option: “typewriter order.” The program chugs through the first row of the lattice from left to right, then goes through the second row in the same way, and so on down to the bottom.

Glauber dynamics takes a different approach: At each microstep the algorithm selects a single spin at random, with uniform probability, from the entire set of \(N\) spins. In other words, every spin has a \(1 / N\) chance of being chosen at each microstep, whether or not it has been chosen before. A macrostep lasts for \(N\) microsteps, but the procedure does not guarantee that every spin will get a turn during every sweep. Some sites will be passed over, while others are visited more than once. Still, as the number of steps goes to infinity, all the sites eventually get equal attention.

So much for the visitation sequence; now on to the acceptance function. It has three parts:

  1. Calculate \(\Delta E\), the change in energy that would result from flipping the selected spin \(s\). To determine this value, we need to examine \(s\) itself and its four nearest neighbors; a sketch of this computation appears just after the list.
  2. Based on \(\Delta E\) and the temperature \(T\), calculate the probability \(p\) of flipping spin \(s\).
  3. Generate a random number \(r\) in the interval \([0, 1)\). If \(r \lt p\), flip the selected spin; otherwise leave it as is.
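
The program listings below call a function named calcDeltaE without defining it. Here is a minimal sketch of what such a function might look like, assuming spins stored as +1 or -1 in a lattice[x][y] array with wraparound (toroidal) boundaries; the actual version in the GitHub source may differ in detail:

function calcDeltaE(x, y) {
  // Flipping spin s changes the total energy by 2 * s * (sum of the four
  // nearest neighbors), so deltaE is always -8, -4, 0, +4, or +8.
  const left  = lattice[(x + gridSize - 1) % gridSize][y];
  const right = lattice[(x + 1) % gridSize][y];
  const up    = lattice[x][(y + gridSize - 1) % gridSize];
  const down  = lattice[x][(y + 1) % gridSize];
  return 2 * lattice[x][y] * (left + right + up + down);
}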

Part 2 of the acceptance rule calls for a mathematical function that maps values of \(\Delta E\) and \(T\) to a probability \(p\). To be a valid probability, \(p\) must be confined to the interval \([0, 1]\). To make sense in the context of the Monte Carlo method, the function should assign a higher probability to spin flips that reduce the system’s energy, without totally excluding those that bring an energy increase. And this preference for negative \(\Delta E\) should grow sharper as \(T\) gets lower. The specific functions chosen by the Metropolis and the Glauber algorithms satisfy both of these criteria.

Let’s begin with the Glauber acceptance function, which I’m going to call the G-rule:

\[p = \frac{e^{-\Delta E/T}}{1 + e^{-\Delta E/T}}.\]

Parts of this equation should look familiar. The expression for the Boltzmann weight, \(e^{-\Delta E/T}\), appears twice, except that the configuration energy \(E\) is replaced by \(\Delta E\), the change in energy when a specific spin is flipped. But where the Boltzmann weight ranges from zero to infinity, the quotient of exponentials in the G-rule stays within the bounds of \(0\) to \(1\). The curve at right shows the probability distribution for \(T = 2.7\), near the critical point for the onset of magnetization. To get a qualitative understanding of the form of this curve, consider what happens when \(\Delta E\) grows without bound toward positive infinity: The numerator of the fraction goes to \(0\) while the denominator goes to \(1\), leaving a quotient that approaches \(0.0\). At the other end of the curve, as \(\Delta E\) goes to negative infinity, both numerator and denominator increase without limit, and the probability approaches (but never quite reaches) \(1.0\). Between these extremes, the curve is symmetrical and smooth. It looks like it would make a pleasant ski run.

Figure 5: Glauber probability of a spin flip as a function of ΔE and T.

The Metropolis acceptance criterion also includes the expression \(e^{-\Delta E/T}\), but the function and the curve are quite different. The acceptance probability is defined in a piecewise fashion:

\[p = \left\{\begin{array}{cl}
1 & \text { if } \quad \Delta E \leq 0 \\
e^{-\Delta E/T} & \text { if } \quad \Delta E>0
\end{array}\right.\]

Figure 6: Metropolis probability of a spin flip as a function of ΔE and T.

In words, the rule says: If flipping a spin would reduce the energy of the system or leave it unchanged, always do it; otherwise, flip the spin with probability \(e^{-\Delta E/T}\). The probability curve (left) has a steep escarpment; if this one is a ski slope, it rates a black diamond. Unlike the smooth and symmetrical Glauber curve, this one has a sharp corner, as well as a strong bias. Consider a spin with \(\Delta E = 0\). Glauber flips such a spin with probability \(1/2\), but Metropolis always flips it.

The graphs in Figure 7 compare the two acceptance functions over a range of temperatures. The curves differ most at the highest temperatures, and they become almost indistinguishable at the lowest temperatures, where both curves approximate a step function. Although both functions are defined over the entire real number line, the two-dimensional Ising model allows \(\Delta E\) to take on only five distinct values: \(-8, -4, 0, +4,\) and \(+8\). Thus the Ising probability functions are never evaluated anywhere other than the positions marked by colored dots.

Figure 7: Glauber and Metropolis acceptance probabilities at four temperatures.

Here are JavaScript functions that implement a macrostep in each of the algorithms, with their differences in both visitation sequence and acceptance function:

function runMetro() {
  // Metropolis: deterministic typewriter scan with the M-rule acceptance test.
  for (let y = 0; y < gridSize; y++) {
    for (let x = 0; x < gridSize; x++) {
      let deltaE = calcDeltaE(x, y);
      let boltzmann = Math.exp(-deltaE/temperature);
      // Always flip when the energy falls or stays the same;
      // otherwise flip with probability exp(-deltaE/T).
      if ((deltaE <= 0) || (Math.random() < boltzmann)) {
        lattice[x][y] *= -1;
      }
    }
  }
  drawLattice();
}

function runGlauber() {
  // Glauber: random visitation with the G-rule acceptance test.
  for (let i = 0; i < N; i++) {
    let x = Math.floor(Math.random() * gridSize);
    let y = Math.floor(Math.random() * gridSize);
    let deltaE = calcDeltaE(x, y);
    let boltzmann = Math.exp(-deltaE/temperature);
    // Flip with probability exp(-deltaE/T) / (1 + exp(-deltaE/T)).
    if (Math.random() < (boltzmann / (1 + boltzmann))) {
      lattice[x][y] *= -1;
    }
  }
  drawLattice();
}

(As I mentioned above, the rest of the source code for the simulations is available on GitHub.)


We’ve seen that the Metropolis and the Glauber algorithms differ in their choice of both visitation sequence and acceptance function. They also produce different patterns or textures when you watch them in action on the computer screen. But what about the numbers? Do they predict different properties for the Ising ferromagnet?

A theorem mentioned throughout the MCMC literature says that these two algorithms (and others like them) should give identical results when properties of the model are measured at thermal equilibrium. I have encountered this statement many times in my reading, but until a few weeks ago I had never tested it for myself. Here are some magnetization data that look fairly convincing:

Magnitude of Magnetization

             T = 1.0   T = 2.0   T = 2.7   T = 3.0   T = 5.0   T = 10.0
Metropolis   0.9993    0.9114    0.0409    0.0269    0.0134    0.0099
Glauber      0.9993    0.9118    0.0378    0.0274    0.0136    0.0100

The table records the absolute value of the magnetization in Metropolis and Glauber simulations at various temperatures. Five of the six measurements differ by less than \(0.001\); the exception comes at \(T = 2.7\), near the critical point, where the difference rises to about \(0.003\). Note that the results are consistent with the presence of a phase transition: Magnetization remains close to \(0\) down to the critical point and then approaches \(1\) at lower temperatures. (By reporting the magnitude, or absolute value, of the magnetization, we treat all-up and all-down states as equivalent.)
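
The measured quantity itself is simple to compute. Here is a minimal sketch in the style of the earlier listings (counting up and down spins, as described below, amounts to the same computation):

function magnetization() {
  // Mean spin value; the absolute value treats all-up and all-down
  // states as equivalent.
  let sum = 0;
  for (let x = 0; x < gridSize; x++) {
    for (let y = 0; y < gridSize; y++) {
      sum += lattice[x][y];
    }
  }
  return Math.abs(sum) / N;
}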

I made the measurements by first setting the temperature and then letting the simulation run for at least 1,000 macrosteps in order to reach an equilibrium condition. How do I know that 1,000 macrosteps is enough to reach equilibrium? There is a fascinating body of work on this question, full of ingenious ideas, such as running two simulations that approach equilibrium from opposite directions and waiting until they agree. I took the duller approach of just waiting until the numbers stopped changing. I also started each run from an initial state on the same side of \(T_C\) as the target temperature. Following this “burn-in” period, the simulation continued for another 100 macrosteps; during this phase I counted up and down spins after each macrostep. The entire procedure, including both burn-in and measurement periods, was repeated 100 times, after which I averaged all the measurements for each temperature.

When I first looked at these results and saw the close match between Metropolis and Glauber, I felt a twinge of paradoxical surprise. I call it paradoxical because I knew before I started what I would see, and that’s exactly what I did see, so obviously I should not have been surprised at all. But some part of my mind didn’t get that memo, and as I watched the two algorithms converge to the same values all across the temperature scale, it seemed remarkable.

The theory behind this convergence was apparently understood by the pioneers of MCMC in the 1950s. The theorem states that any MCMC algorithm will produce the same distribution of states at equilibrium, as long as the algorithm satisfies two conditions, called ergodicity and detailed balance.

The adjective ergodic was coined by Boltzmann, and is usually said to have the Greek roots εργον οδος, meaning something like “energy path.” Giovanni Gallavotti disputes this etymology, suggesting a derivation from εργον ειδος, which he translates as “monode with a given energy.” Take your pick. Ergodicity requires that the system be able to move from any one configuration to any other configuration in a finite number of steps. In other words, there are no cul-de-sac states you might wander into and never be able to escape, or border walls that divide the space into isolated regions. The Metropolis and Glauber algorithms satisfy this condition because every transition between states has a nonzero probability. (In both algorithms the acceptance probability comes arbitrarily close to zero but never touches it.) In the specific case of the \(100 \times 100\) lattice I’ve been playing with, any two states are connected by a path of no more than \(10{,}000\) steps.

Both algorithms also exhibit detailed balance, which is essentially a requirement of reversibility. Suppose that while watching a model run, you observe a transition from state \(A\) to state \(B\). Detailed balance says that if you continue observing long enough, you will see the inverse transition \(B \rightarrow A\) with the same frequency as \(A \rightarrow B\). Given the shapes of the acceptance curves, this assertion may seem implausible. If \(A \rightarrow B\) is energetically favorable, then \(B \rightarrow A\) must be unfavorable, and it will have a lower probability. But there’s another factor at work here. Remember we are assuming the system is in equilibrium, which implies that the occupancy of each state—or the amount of time the system spends in that state—is proportional to the state’s Boltzmann weight. Because the system is more often found in state \(B\), the transition \(B \rightarrow A\) has more chances to be chosen, counterbalancing the lower intrinsic probability.
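
Detailed balance has a compact statement in symbols. If \(\pi_S \propto e^{-E_S/T}\) is the equilibrium occupancy of state \(S\), the condition reads (this is the standard textbook formulation, not anything specific to my programs):

\[\pi_A \, P(A \rightarrow B) = \pi_B \, P(B \rightarrow A).\]

Checking it for the M-rule with \(E_B > E_A\): the left side is \(e^{-E_A/T} \cdot e^{-(E_B - E_A)/T} = e^{-E_B/T}\), and the right side is \(e^{-E_B/T} \cdot 1\). The two flows match, and a similar calculation works for the G-rule.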


The claim that Metropolis and Glauber yield identical results applies only when the Ising system is in equilibrium—poised at the eternal noon where the sun stands still and nothing ever changes. For Metropolis and his colleagues at Los Alamos in the early 1950s, understanding the equilibrium behavior of a computational model was challenge enough. They were coaxing answers from a computer with about four kilobytes of memory. Ten years later, however, Glauber wanted to look beyond equilibrium. For example, he wanted to know what happens when the temperature suddenly changes. How do the spins reorganize themselves during the transient period between one equilibrium state and another? He designed his version of the Ising model specifically to deal with such dynamic situations. He wrote in his 1963 paper:

If the mathematical problems of equilibrium statistical mechanics are great, they are at least relatively well-defined. The situation is quite otherwise in dealing with systems which undergo large-scale changes with time…. We have attempted, therefore, to devise a form of the Ising model whose behavior can be followed exactly, in statistical terms, as a function of time.

The data were gathered with Program 1, but using commands that have to be invoked from the console rather than the web interface. See the source code for details. So how does the Glauber model behave following an abrupt change in temperature? And how does it compare with the Metropolis model? Let’s try the experiment. We’ll simulate an Ising lattice at high temperature (\(T = 10.0\)), and let the program run long enough to be sure the system is in thermal equilibrium. Then we’ll instantaneously lower the temperature to \(T = 2.0\), which is well below the critical point. During this flash-freeze process, we’ll monitor the magnetization of the lattice. The graph below records the magnetization after every tenth Monte Carlo macrostep. The curves are averages computed over 500 repetitions of the experiment.
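
In outline, the experiment looks like this in code. The sketch assumes a runMacrostep() function standing in for either runMetro or runGlauber, along with the magnetization() helper sketched earlier; the console commands in Program 1 presumably do something similar:

function quenchRun(burnIn, steps) {
  temperature = 10.0;                     // equilibrate at high temperature
  for (let i = 0; i < burnIn; i++) runMacrostep();
  temperature = 2.0;                      // instantaneous flash freeze
  const trace = [];
  for (let i = 0; i < steps; i++) {
    runMacrostep();
    if (i % 10 === 0) trace.push(magnetization()); // sample every tenth macrostep
  }
  return trace;  // traces from many runs are averaged to make the graph
}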

Figure 8: Step-response curves for the Metropolis and Glauber models when the temperature changes abruptly from T = 10 to T = 2.

Clearly, in this dynamic situation, the algorithms are not identical or interchangeable. The Metropolis program adapts more quickly to the cooler environment; Glauber produces a slower but steadier rise in magnetization. The curves differ in shape, with Metropolis exhibiting a distinctive “knee” where the slope flattens. I want to know what causes these differences, but before digging into that question it seems important to understand why both algorithms are so agonizingly slow. At the right edge of the graph the blue Metropolis curve is approaching the equilibrium value of magnetization (which is about 0.91), but it has taken 7,500 Monte Carlo macrosteps (or 75 million microsteps) to get there. The red Glauber curve will require many more. What’s the holdup?

To put this sluggishness in perspective, let’s look at the behavior of local spin correlations measured under the same circumstances. Graphing the average nearest-neighbor correlation following a sudden temperature drop produces these hockey-stick curves:

Figure 9: Rapid change in local spin correlations following an abrupt drop in temperature.

The response is dramatically faster; both algorithms reach quite high levels of local correlation within just a few macrosteps.

For a hint of why local correlations grow so much faster than global magnetization, it’s enough to spend a few minutes watching the Ising simulation evolve on the computer screen. When the temperature plunges from warm \(T = 5\) to frigid \(T = 2\), nearby spins have a strong incentive to line up in parallel, but magnetization does not spread uniformly across the entire lattice. Small clusters of aligned spins start expanding, and they merge with other clusters of the same polarity, thereby growing larger still. It doesn’t take long, however, before clusters of opposite polarity run into one another, blocking further growth for both. From then on, magnetization is a zero-sum game: The up team can win only if the down team loses.

Figure 10: Nine stages in the evolution of an Ising lattice under the Metropolis algorithm, following a change in temperature from T = 5 to T = 2.

Figure 10 shows the first few Monte Carlo macrosteps following a flash freeze. The initial configuration at the upper left reflects the high-temperature state, with a nearly random, salt-and-pepper mix of up and down spins. The rest of the snapshots (reading left to right and top to bottom) show the emergence of large-scale order. Prominent clusters appear after the very first macrostep, and by the second or third step some of these blobs have grown to include hundreds of lattice sites. But the rate of change becomes sluggish thereafter. The balance of power may tilt one way and then the other, but it’s hard for either side to gain a permanent advantage. The mottled, camouflage texture will persist for hundreds or thousands of steps.

If you choose a single spin at random from such a mottled lattice, you’ll almost surely find that it lives in an area where most of the neighbors have the same orientation. Hence the high levels of local correlation. But that fact does not imply that the entire array is approaching unanimity. On the contrary, the lattice can be evenly divided between up and down domains, leaving a net magnetization near zero. (Yes, it’s like political polarization, where homogeneous states add up to a deadlocked nation.)

The images in Figure 11 show three views of the same state of an Ising lattice. At left is the conventional representation, with sinuous, interlaced territories of nearly pure up and down spins. The middle panel shows the same configuration recolored according to the local level of spin correlation. The vast majority of sites (lightest hue) are surrounded by four neighbors of the same orientation; they correspond to both the mauve and the indigo regions of the leftmost image. Only along the boundaries between domains is there any substantial conflict, where darker colors mark cells whose neighbors include spins of the opposite orientation. The panel at right highlights a special category of sites—those with exactly two parallel and two antiparallel neighbors. They are special because they are tiny neutral territories wedged between the contending factions. Flipping such a spin does not alter its correlation status; both before and after it has two like and two unlike neighbors. Flipping a neutral spin also does not alter the total energy of the system. But it can shift the magnetization. Indeed, flipping such “neutral” spins is the main agent of evolution in the Ising system at low temperature.

Figure 11: Three views of the same lattice: up and down domains, boundaries between domains, and highlighted cells with equal numbers of like and unlike neighbors.

The struggle to reach full magnetization in an Ising lattice looks like trench warfare. Contending armies, almost evenly matched, face off over the boundary lines between up and down territories. All the action is along the borders; nothing that happens behind the lines makes much difference. Even along the boundaries, some sections of the front are static. If a domain margin is a straight line parallel to the \(x\) or \(y\) axis, the sites on each side of the border have three friendly neighbors and only one enemy; they are unlikely to flip. The volatile neutral sites that make movement possible appear only at corners and along diagonals, where neighborhoods are evenly split.

There are Monte Carlo algorithms that flip only neutral spins. They have the pleasant property of conserving energy, which is not true of the Metropolis and Glauber algorithms. Neutral sites become rare as the light and dark regions coalesce into fewer but larger blobs. This scarcity of freely flippable spins leaves the Ising gears grinding without lubricant, and not making much progress. The situation is particularly acute in those cases where a broad stripe extends all the way across the lattice from left to right or from top to bottom. If the stripe’s boundaries are exactly horizontal or vertical, there will be no neutral sites at all. I’ll return to this situation below.


From these observations and ruminations I feel I’ve acquired some intuition about why my Monte Carlo simulations bog down during the transition from a chaotic to an ordered state. But why is the Glauber algorithm even slower than the Metropolis?

Since the schemes differ in two features—the visitation sequence and the acceptance function—it makes sense to investigate which of those features has the greater effect on the convergence rate. That calls for another computational experiment.

The tableau below is a mix-and-match version of the MCMC Ising simulation. In the control panel you can choose the visitation order and the acceptance function independently. If you select a deterministic visitation order and the M-rule acceptance function, you have the classical Metropolis algorithm. Likewise random order and the G-rule correspond to Glauber dynamics. But you can also pair deterministic order with the G-rule or random order with the M-rule. (The latter mixed-breed choice is what I unthinkingly implemented in my 2019 program.)

I have also included an acceptance rule labeled M*, which I’ll explain below.
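
One way to organize such a mix-and-match simulator is to pass the two interchangeable parts as parameters. The sketch below is my guess at a plausible structure, not a transcription of Program 2; it reuses the globals from the earlier listings:

function runMixed(randomOrder, acceptProb) {
  // One macrostep = N microsteps. acceptProb(deltaE) returns the flip
  // probability under the chosen rule; randomOrder picks the visitation scheme.
  for (let i = 0; i < N; i++) {
    let x, y;
    if (randomOrder) {
      x = Math.floor(Math.random() * gridSize);
      y = Math.floor(Math.random() * gridSize);
    } else {
      x = i % gridSize;               // typewriter order:
      y = Math.floor(i / gridSize);   // left to right, top to bottom
    }
    if (Math.random() < acceptProb(calcDeltaE(x, y))) {
      lattice[x][y] *= -1;
    }
  }
  drawLattice();
}

// The M-rule and G-rule expressed as acceptance functions:
const mRule = dE => (dE <= 0) ? 1 : Math.exp(-dE / temperature);
const gRule = dE => Math.exp(-dE / temperature) / (1 + Math.exp(-dE / temperature));

With these pieces, runMixed(false, mRule) is the classical Metropolis algorithm and runMixed(true, gRule) is Glauber dynamics; the other two combinations are the hybrids.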

Watching the screen while switching among these alternative components reveals that all the combinations yield different visual textures, at least at some temperatures. Also, it appears there’s something special about the pairing of deterministic visitation order with the M-rule acceptance function (i.e., the standard Metropolis algorithm).

Try setting the temperature to 2.5 or 3.0. I find that the distinctive sensation of fluttery motion—bird flocks migrating across the screen—appears only with the deterministic/M-rule combination. With all other pairings, I see amoeba-like blobs that grow and shrink, fuse and divide, but there’s not much coordinated motion.

Now lower the temperature to about 1.5, and alternately click Run and Reset until you get a persistent bold stripe that crosses the entire grid either horizontally or vertically. (Diagonal stripes are also possible, but rare. This may take several tries.) Again the deterministic/M-rule combination is different from all the others. With this mode, the stripe appears to wiggle across the screen like a millipede, either right to left or bottom to top. Changing either the visitation order or the acceptance function suppresses this peristaltic motion; the stripe may still have pulsating bulges and constrictions, but they’re not going anywhere.

These observations suggest some curious interactions between the visitation order and the acceptance function, but they do not reveal which factor gives the Metropolis algorithm its speed advantage. Using the same program, however, we can gather some statistical data that might help answer the question.

Figure 12: Magnetization curves for four combinations of visitation sequence and acceptance function.

These curves were a surprise to me. From my earlier experiments I already knew that the Metropolis algorithm—the combination of elements in the blue curve—would outperform the Glauber version, corresponding to the red curve. But I expected the acceptance function to account for most of the difference. The data do not support that supposition. On the contrary, they suggest that both elements matter, and the visitation sequence may even be the more important one. A deterministic visitation order beats a random order no matter which acceptance function it is paired with.

My expectations were based mainly on discussions of the “mixing time” for various Monte Carlo algorithms. Mixing time is the number of steps needed for a simulation to reach equilibrium from an arbitrary initial state, or in other words the time needed for the system to lose all memory of how it began. If you care only about equilibrium properties, then an algorithm that offers the shortest mixing time is likely to be preferred, since it also minimizes the number of CPU cycles you have to waste before you can start taking data. Discussions of mixing time tend to focus on the acceptance function, not the visitation sequence. In particular, the M-rule acceptance function of the Metropolis algorithm was explicitly designed to minimize mixing time.

What I am measuring in my experiments is not exactly mixing time, but it’s closely related. Going from an arbitrary initial state to equilibrium at a specified temperature is much like a transition from one temperature to another. What’s going on inside the model is similar. Thus if the acceptance function determines the mixing time, I would expect it also to be the major factor in adapting to a new temperature regime.

On the other hand, I can offer a plausible-sounding theory of why visitation order might matter. The deterministic model scans through all \(10{,}000\) lattice sites during every Monte Carlo macrostep; each such sweep is guaranteed to visit every site exactly once. The random order makes no such promise. In that algorithm, each microstep selects a site at random, whether or not it has been visited before. A macrostep concludes after \(10{,}000\) such random choices. Under this protocol some sites are passed over without being selected even once, while others are chosen two or more times. How many sites are likely to be missed? During each microstep, every site has the same probability of being chosen, namely \(1 / 10{,}000\). Thus the probability of not being selected on any given turn is \(9{,}999 / 10{,}000\). For a site to remain unvisited throughout an entire macrostep, it must be passed over \(10{,}000\) times in a row. The probability of that event is \((9{,}999 / 10{,}000)^{10{,}000}\), which works out to about \(0.368\). For \(N\) sites, the probability is \(((N - 1) / N)^N\); as \(N\) goes to infinity this expression converges to \(1 / e \approx 0.367879\). Thus in each macrostep roughly \(3{,}700\) of the \(10{,}000\) spins are simply never called on. They have no chance of being flipped no matter what the acceptance function might say.
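
A few lines of code confirm the arithmetic; this sketch simply counts the sites missed during one macrostep of random visitation:

function unvisitedFraction(N) {
  // One macrostep: N random draws, with replacement, from N sites.
  const visited = new Array(N).fill(false);
  for (let i = 0; i < N; i++) {
    visited[Math.floor(Math.random() * N)] = true;
  }
  return visited.filter(v => !v).length / N;
}

A typical call to unvisitedFraction(10000) returns a value near 0.368, or about 1/e.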

Excluding more than a third of the sites on every pass through the lattice seems certain to have some effect on the outcome of an experiment. In the long run the random selection process is fair, in the sense that every spin is sampled at the same frequency. But the rate of convergence to the equilibrium state may well be lower.

There are also compelling arguments for the importance of the acceptance function. A key fact mentioned by several authors is that the M acceptance rule leads to more spin flips per Monte Carlo step. If the energy change of a proposed flip is favorable or neutral, the M-rule always approves the flip, whereas the G-rule rejects some proposed flips even when they lower the energy. Indeed, for all values of \(T\) and \(\Delta E\) the M-rule gives a higher probability of acceptance than the G-rule does. This liberal policy—if in doubt, flip—allows the M-rule to explore the space of all possible spin configurations more rapidly.

The discrete nature of the Ising model, with just five possible values of \(\Delta E\), introduces a further consideration. At \(\Delta E = \pm 4\) and at \(\Delta E = \pm 8\), the M-rule and the G-rule don’t actually differ very much when the temperature is below the critical point (see Figure 7). The two curves diverge only at \(\Delta E = 0\): The M-rule invariably flips a spin in this circumstance, whereas the G-rule does so only half the time, assigning a probability of \(0.5\). This difference is important because the lattice sites where \(\Delta E = 0\) are the ones that account for almost all of the spin flips at low temperature. These are the neutral sites highlighted in the right panel of Figure 11, the ones with two like and two unlike neighbors.

This line of thought leads to another hypothesis. Maybe the big difference between the Metropolis and the Glauber algorithms has to do with the handling of this single point on the acceptance curve. And there’s an obvious way to test the hypothesis: Simply change the M-rule at this one point, having it toss a coin whenever \(\Delta E = 0\). The definition becomes:

\[p = \left\{\begin{array}{cl}
1 & \text { if } \quad \Delta E \lt 0 \\
\frac{1}{2} & \text { if } \quad \Delta E = 0 \\
e^{-\Delta E/T} & \text { if } \quad \Delta E>0
\end{array}\right.\]
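
Expressed as code, M* is a one-line amendment to the M-rule. Here is a sketch of the acceptance test, written as a hypothetical helper in the style of the earlier listings:

function acceptMStar(deltaE) {
  if (deltaE < 0) return true;                    // downhill: always flip
  if (deltaE === 0) return Math.random() < 0.5;   // neutral: toss a coin
  return Math.random() < Math.exp(-deltaE / temperature);  // uphill: Boltzmann factor
}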

This modified acceptance function is the M* rule offered as an option in Program 2. Watching it in action, I find that switching the Metropolis algorithm from M to M* robs it of its most distinctive traits: At high temperature the fluttering birds are banished, and at low temperature the wiggling worms are immobilized. The effects on convergence rates are also intriguing. In the Metropolis algorithm, replacing M with M* greatly diminishes convergence speed, from a standout level to just a little better than average. At the same time, in the Glauber algorithm replacing G with M* brings a considerable performance improvement; when combined with random visitation order, M* is superior not only to G but also to M.

Figure 13: Magnetization curves for the M* acceptance rule, with dimmed versions of the M-rule and G-rule curves for comparison.

I don’t know how to make sense of all these results except to suggest that both the visitation order and the acceptance function have important roles, and non-additive interactions between them may also be at work. Here’s one further puzzle. In all the experiments described above, the Glauber algorithm and its variations respond to a disturbance more slowly than Metropolis. But before dismissing Glauber as the perennial laggard, take a look at Figure 14.

Figure 14: The order-to-disorder transition.

Here we’re observing a transition from low to high temperature, the opposite of the situation discussed above. When going in this direction—from an orderly phase to a chaotic one, melting rather than freezing—both algorithms are quite zippy, but Glauber is a little faster than Metropolis. Randomness, it appears, is good for randomization. That sounds sensible enough, but I can’t explain in any detail how it comes about.


Up to this point, a deterministic visitation order has always meant the typewriter scan of the lattice—left to right and top to bottom. Of course this is not the only deterministic route through the grid. In Program 3 you can play with a few of the others.

Why should visitation order matter at all? As long as you touch every site exactly once, you might imagine that all sequences would produce the same result at the end of a macrostep. But it’s not so, and it’s not hard to see why. Whenever two sites are neighbors, the outcome of applying the Monte Carlo process can depend on which neighbor you visit first.

Consider the cruciform configuration at right. At first glance, you might assume that the dark central square will be unlikely to change its state. After all, the central square has four like-colored neighbors; if it were to flip, it would have four opposite-colored neighbors, and the energy associated with those spin-spin interactions would rise from \(-4\) to \(+4\). Any visitation sequence that went first to the central square would almost surely leave it unflipped. However, when the Metropolis algorithm comes tap-tap-tapping along in typewriter mode, the central cell does in fact change color, and so do all four of its neighbors. The entire structure is annihilated in a single sweep of the algorithm. (The erased pattern does leave behind a ghost—one of the diagonal neighbor sites flips from light to dark. But then that solitary witness disappears on the next sweep.)

To understand what’s going on here, just follow along as the algorithm marches from left to right and top to bottom through the lattice. When it reaches the central square of the cross, it has already visited (and flipped) the neighbors to the north and to the west. Hence the central square has two neighbors of each color, so that \(\Delta E = 0\). According to the M-rule, that square must be flipped from dark to light. The remaining two dark squares are now isolated, with only light neighbors, so they too flip when their time comes.

The underlying issue here is one of chronology—of past, present, and future. Each site has its moment in the present, when it surveys its surroundings and decides (based on the results of the survey) whether or not to change its state. But in that present moment, half of the site’s neighbors are living in the past—the typewriter algorithm has already visited them—and the other half are in the future, still waiting their turn.

A well-known alternative to the typewriter sequence might seem at first to avoid this temporal split decision. Superimposing a checkerboard pattern on the lattice creates two sublattices that do not communicate for purposes of the Ising model. Each black square has only white neighbors, and vice versa. Thus you can run through all the black sites (in any order; it really doesn’t matter), flipping spins as needed. Afterwards you turn to the white sites. These two half-scans make up one macrostep. Throughout the process, every site sees all of its neighbors in the same generation. And yet time has not been abolished. The black cells, in the first half of the sweep, see four neighboring sites that have not yet been visited. The white cells see neighbors that have already had their chance to flip. Again half the neighbors are in the past and half in the future, but they are distributed differently.
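
In code, a checkerboard sweep is a small variation on runMetro; a sketch, again assuming the same globals and the M-rule acceptance test:

function runCheckerboard() {
  // Two half-sweeps: first the sites where x + y is even ("black"),
  // then the sites where x + y is odd ("white").
  for (let parity = 0; parity < 2; parity++) {
    for (let y = 0; y < gridSize; y++) {
      for (let x = 0; x < gridSize; x++) {
        if ((x + y) % 2 !== parity) continue;
        let deltaE = calcDeltaE(x, y);
        if ((deltaE <= 0) || (Math.random() < Math.exp(-deltaE / temperature))) {
          lattice[x][y] *= -1;
        }
      }
    }
  }
  drawLattice();
}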

There are plenty of other deterministic sequences. You can trace successive diagonals; in Program 3 they run from southwest to northeast. There’s the ever-popular boustrophedonic order, following in the footsteps of the ox in the plowed field. More generally, if we number the sites consecutively from \(1\) to \(10{,}000\), any permutation of this sequence represents a valid visitation order, touching each site exactly once. There are \(10{,}000!\) such permutations, a number that dwarfs even the \(2^{10{,}000}\) configurations of the binary-valued lattice. The permuted choice in Program 3 selects one of those permutations at random; it is then used repeatedly for every macrostep until the program is reset. The re-permuted option is similar but selects a new permutation for each macrostep. The random selection is here for comparison with all the deterministic variations.
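
The permuted and re-permuted orders need a uniformly random permutation of the site indices; the standard tool for that job is the Fisher–Yates shuffle. A sketch (I don’t claim Program 3 does it exactly this way):

function shuffledSites() {
  // Build the identity permutation of 0 .. N-1, then shuffle it in place.
  const order = Array.from({length: N}, (_, i) => i);
  for (let i = N - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [order[i], order[j]] = [order[j], order[i]];
  }
  return order;
}

The permuted option would call shuffledSites() once and reuse the result for every macrostep; re-permuted would call it afresh at the start of each macrostep.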

(There’s one final button labeled simultaneous, which I’ll explain below. If you just can’t wait, go ahead and press it, but I won’t be held responsible for what happens.)

The variations add some further novelties to the collection of curious visual effects seen in earlier simulations. The fluttering wings are back, in the diagonal as well as the typewriter sequences. Checkerboard has a different rhythm; I am reminded of a crowd of frantic commuters in the concourse of Grand Central Terminal. Boustrophedon is bidirectional: The millipede’s legs carry it both up and down or both left and right at the same time. Permuted is similar to checkerboard, but re-permuted is quite different.

The next question is whether these variant algorithms have any quantitative effect on the model’s dynamics. Figure 15 shows the response to a sudden freeze for seven visitation sequences. Five of them follow roughly the same arcing trajectory. Typewriter remains at the top of the heap, but checkerboard, diagonal, boustrophedon, and permuted are all close by, forming something like a comet tail. The random algorithm is much slower, which is to be expected given the results of earlier experiments.

Figure 15: Magnetization curves for seven visitation sequences: typewriter, checkerboard, diagonal, boustrophedon, permuted, re-permuted, and random. Each curve shows the evolution of magnetization over 1,000 macrosteps after the temperature shifts from T = 10 to T = 2.

The intriguing case is the re-permuted order, which seems to lie in the no man’s land between the random and the deterministic algorithms. Perhaps it belongs there. In earlier comparisons of the Metropolis and Glauber algorithms, I speculated that random visitation is slower to converge because many sites are passed over in each macrostep, while others are visited more than once. That’s not true of the re-permuted visitation sequence, which calls on every site exactly once, though in random order. The only difference between the permuted algorithm and the re-permuted one is that the former reuses the same permutation over and over, whereas re-permuted creates a new sequence for every macrostep. The faster convergence of the static permuted algorithm suggests there is some advantage to revisiting all the same sites in the same order, no matter what that order may be. Most likely this has something to do with sites that get switched back and forth repeatedly, on every sweep.


Now for the mysterious simultaneous visitation sequence. If you have not played with it yet in Program 3, I suggest running the following experiment. Select the typewriter sequence, press the Run button, reduce the temperature to 1.10 or 1.15, and wait until the lattice is all mauve or all indigo, with just a peppering of opposite-color dots. (If you get a persistent wide stripe instead of a clear field, raise the temperature and try again.) Now select the simultaneous visitation order. I have deliberately slowed this version of the model by a factor of 10, to make the nature of the action clearer. Most likely nothing much will happen for a little while, then you’ll notice tiny patches of checkerboard pattern, with all the individual cells in these patches blinking from light to dark and back again on every other cycle. Then notice that the checkerboard patches are growing. When they touch, they merge, either seamlessly if they have the same polarity or with a conspicuous suture where opposite polarities meet. Eventually the checkerboards will cover the whole screen. Furthermore, once the pattern is established, it will persist even if you raise the temperature all the way to the top, where any other algorithm would produce a roiling random stew.

This behavior is truly weird but not inexplicable. The algorithm behind it is one that I have always thought should be the best approach to a Monte Carlo Ising simulation. In fact it seems to be just about the worst.

All of the other visitation sequences are—as the term suggests they should be—sequential. They visit one site at a time, and differ only in how they decide where to go next. If you think about the Ising model as if it were a real physical process, this kind of serialization seems pretty implausible. I can’t bring myself to believe that atoms in a ferromagnet politely take turns in flipping their spins. And surely there’s no central planner of the sequence, no orchestra conductor on a podium, pointing a baton at each site when its turn comes.

Natural systems have an all-at-onceness to them. They are made up of many independent agents that are all carrying out the same kinds of activities at the same time. If we could somehow build an Ising model out of real atoms, then each cell or site would be watching the state of its four neighbors all the time, and also sensing thermal agitation in the lattice; it would decide to flip whenever circumstances favored that choice, although there might be some randomness to the timing. If we imagine a computer model of this process (yes, a model of a model), the most natural implementation would require a highly parallel machine with one processor per site.

Lacking such fancy hardware, I make do with fake parallelism. The simultaneous algorithm makes two passes through the lattice on every macrostep. On the first pass, it looks at the neighborhood of each site and decides whether or not to flip the spin, but it doesn’t actually make any changes to the lattice. Instead, it uses an auxiliary array to keep track of which spins are scheduled to flip. Then, after all sites have been surveyed in the first pass, the second pass goes through the lattice again, flipping all the spins that were designated in the first pass. The great advantage of this scheme is that it avoids the temporal oddities of working within a lattice where some spins have already been updated and others have not. In the simultaneous algorithm, all the spins make the transition from one generation to the next at the same instant.
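
Here is the two-pass scheme in code, sketched with the M-rule and the usual globals (my reconstruction, not the exact text of Program 3):

function runSimultaneous() {
  // Pass 1: test every site against the current lattice, recording
  // the decisions in an auxiliary array without changing any spins.
  const flip = [];
  for (let x = 0; x < gridSize; x++) {
    flip[x] = [];
    for (let y = 0; y < gridSize; y++) {
      const deltaE = calcDeltaE(x, y);
      flip[x][y] = (deltaE <= 0) || (Math.random() < Math.exp(-deltaE / temperature));
    }
  }
  // Pass 2: apply all the recorded flips at the same instant.
  for (let x = 0; x < gridSize; x++) {
    for (let y = 0; y < gridSize; y++) {
      if (flip[x][y]) lattice[x][y] *= -1;
    }
  }
  drawLattice();
}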

When I first wrote a program to implement this scheme, almost 40 years ago, I didn’t really know what I was doing, and I was utterly baffled by the outcome. The information mechanics group at MIT (Ed Fredkin, Tommaso Toffoli, Norman Margolus, and Gérard Vichniac) soon came to my rescue and explained what was going on, but all these years later I still haven’t quite made my peace with it.

Although the pattern looks like what you might see in an antiferromagnet—a material in which spins prefer antiparallel alignment—the resemblance is deceptive. For a true antiferromagnet the checkerboard arrangement is stable; here it is maximally unstable. Once you’ve observed the “blinking checkerboard catastrophe,” it’s not hard to understand the mechanism. For a ferromagnetic Ising model, a checkerboard pattern of alternating up and down spins has the highest possible energy and should therefore be the least likely configuration of the lattice. Every single site is surrounded by four opposite-color neighbors and therefore has a strong incentive to flip. That’s just the problem. With the simultaneous update rule, every spin does flip, with the result that the new configuration is a mirror image of the previous one, with every up spin becoming a down and vice versa. When the next round begins, every site wants to flip again.

What continues to disturb me about this phenomenon is that I still think the simultaneous update rule is in some sense more natural or realistic than many of the alternatives. It is closer to how the world works—or how I imagine that it works—than any serial ordering of updates. Yet nature does not create magnets that continually swing between states that have the highest possible energy. (A 2002 paper by Gabriel Pérez, Francisco Sastre, and Rubén Medina attempts to rehabilitate the simultaneous-update scheme, but the blinking catastrophe remains pretty catastrophic.)

This is not the only bizarre behavior to be found in the dark corners of Monte Carlo Ising models. In the Metropolis algorithm, simply setting the temperature to a very high value (say, \(T = 1{,}000\)) has a similar effect. Again every spin flips on every cycle, producing a display that throbs violently but otherwise remains unchanged. The explanation is laughably simple. At high temperature the Metropolis acceptance function flattens out and yields a spin-flip probability near \(1\) for all sites, no matter what their neighborhood looks like. The Glauber acceptance curve also flattens out, but at a value of \(0.5\), which leads to a totally randomized lattice—a more plausible high-temperature outcome.

Figure 16: The Metropolis and Glauber acceptance functions at T = 1,000, where both approximate flat horizontal lines, Metropolis at p = 1.0 and Glauber at p = 0.5.

I have not seen this high-temperature anomaly mentioned in published works on the Metropolis algorithm, although it must have been noticed many times over the years. Perhaps it’s not mentioned because this kind of failure will never be seen in physical systems. \(T = 1{,}000\) in the Ising model is \(370\) times the critical temperature; the corresponding temperature in iron would be over \(300{,}000\) kelvins. Iron boils at \(3{,}000\) kelvins.


The curves in Figure 15 and most of the other graphs above are averages taken over hundreds of repetitions of the Monte Carlo process. The averaging operation is meant to act like sandpaper, smoothing out noise in the curves, but it can also grind away interesting features, replacing a population of diverse individuals with a single homogenized exemplar. Figure 17 shows six examples of the lumpy and jumpy trajectories recorded during single runs of the program:

Figure 17: Six traces of single program runs with the Metropolis and Glauber algorithms.

In these squiggles, magnetization does not grow smoothly or steadily with time. Instead we see spurts of growth followed by plateaus and even episodes of retreat. One of the Metropolis runs is slower than the three Glauber examples, and indeed makes no progress toward a magnetized state. Looking at these plots, it’s tempting to explain them away by saying that the magnetization measurements exhibit high variance. That’s certainly true, but it’s not the whole story.

Figure 18 shows the distribution of times needed for a Metropolis Ising model to reach a magnetization of \(0.85\) in response to a sudden shift from \(T = 10\) to \(T= 2\). The histogram records data from \(10{,}000\) program runs, expressing convergence time in Monte Carlo macrosteps.

Figure 18: Histogram of times needed for a Metropolis simulation to reach 0.85 magnetization following a sudden drop in temperature from T = 10 to T = 2.

The median of this distribution is \(451\) macrosteps; in other words, half of the runs concluded in \(451\) steps or fewer. But the other half of the population is spread out over quite a wide range. Runs of \(10\) times the median length are not great rarities, and the blip at the far right end of the \(x\) axis represents the \(59\) runs that had still not reached the threshold after \(10{,}000\) macrosteps (where I stopped counting). This is a heavy-tailed distribution, which appears to be made up of two subpopulations. In one group, forming the sharp peak at the left, magnetization is quick and easy, but members of the other group are recalcitrant, holding out for thousands of steps. I have a hypothesis about what distinguishes those two sets. The short-lived ones are ponds; the stubborn ones that overstay their welcome are rivers.

When an Ising system cools and becomes fully magnetized, it goes from a salt-and-pepper array of tiny clusters to a monochromatic expanse of one color or the other. At some point during this process, there must be a moment when the lattice is divided into exactly two regions, one light and one dark. Figure 19 shows one possible configuration: A pond of indigo cells lies within a mauve landmass that covers the rest of the lattice. If the system is to make further progress toward full magnetization, either the pond must dry up (leaving a blank expanse of mauve), or the pond must overflow its banks, inundating the remaining land area (leaving a sea of indigo). Experiments reveal that the former outcome is overwhelmingly more likely. Why? One guess is that it’s just a matter of majority rule: Whichever patch controls the greater amount of territory will eventually prevail. But that’s not it. Even when the pond covers most of the lattice, leaving only a thin strip of shoreline at the edges, the surrounding land eventually squeezes the pond out of existence.

Figure 19: Circular pond of indigo cells in the middle of a mauve landmass.

I believe the correct answer has to do with the concepts of inside and outside, connected and disconnected, open sets and closed sets—but I can’t articulate these ideas in a way that would pass mathematical muster. I want to say that the indigo pond is a bounded region, entirely enclosed by the unbounded mauve continent. But the wraparound lattice makes it hard to wrap your head around this notion. The two images in Figure 20 show exactly the same object as Figure 19, the only difference being that the origin of the coordinate system has moved, so that the center of the disk seems to lie on an edge or in a corner of the lattice. The indigo pond is still surrounded by the mauve continent, but it sure doesn’t look that way. In any case, why should boundedness determine which area survives the Monte Carlo process?

Figure 20: The same circular blob with its center point shifted to an edge or a corner of the toroidal lattice.

For me, the distinction between inside and outside began to make sense when I tried taking a more “local” view of the boundaries between regions, and the curvature of those boundaries. As noted in connection with Figure 11, boundaries are places where you can expect to find neutral lattice sites (i.e., \(\Delta E = 0\)), which are the only sites where a spin is likely to change orientation at low temperature. In honor of their neutrality I’m going to call these sites Swiss cells. In Figure 21 I have marked all the Swiss cells with colored dots (making them dotted Swiss!). Orange-dotted cells lie in the indigo interior of the pond, whereas green dots lie outside on the mauve landmass.

Figure 21: Pond with its Swiss cells marked by colored dots.

I’ll spare you the trouble of counting the dots in Figure 21: There are 34 orange ones inside the pond but only 30 green ones outside. That margin could be significant. Because the dotted cells are likely to change state, the greater abundance of orange dots means there are more indigo cells ready to turn mauve than mauve cells that might become indigo. If the bias continues as the system evolves, the indigo region will steadily lose area and eventually be swallowed up.
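
The dot counting can be mechanized. A sketch, assuming indigo spins are stored as -1 and mauve spins as +1 (the program’s actual encoding may differ):

function countSwiss() {
  // Tally the neutral sites (deltaE == 0, i.e., two like and two
  // unlike neighbors), split by spin value.
  let innies = 0, outies = 0;
  for (let x = 0; x < gridSize; x++) {
    for (let y = 0; y < gridSize; y++) {
      if (calcDeltaE(x, y) === 0) {
        if (lattice[x][y] === -1) innies++;  // orange dot, inside the pond
        else outies++;                       // green dot, on the landmass
      }
    }
  }
  return {innies, outies};
}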

But is there any reason to think the interior of the pond will always have a surplus of neutral sites susceptible to flipping? The simplest geometry for a pond is a square or rectangle, as in Figure 22. It has four interior Swiss cells (in the corners) and no exterior Swiss cells—which would be marked with green dots if they existed. In other words, no mauve cells along the outside boundary of a square or rectangular pond have exactly two mauve and two indigo neighbors.

Figure 22: Square pond.

What if the shape becomes a little more complicated? Perhaps the square pond grows a protuberance on one side, and an invagination on another, as in Figure 23. Each of these modifications generates a pair of orange-dotted neutral sites inside the patch, along with a pair of green-dotted ones outside. Thus the count of inside minus outside remains unchanged at four. On first noticing this invariance I had a delicious Aha! moment. There’s a conservation law, I thought. No matter how you alter the outline of the pond, the neutral sites inside will outnumber those outside by four.

Figure 23: Pond with an innie and an outie.

This notion is not utterly crazy. If you walk clockwise around the boundary of the simple square pond in Figure 22, you will have made four right turns by the time you get back to your starting point. Each of those right turns creates a neutral cell in the interior of the pond—we’ll call them innie turns—where you can place an orange dot. A clockwise circuit of the modified pond in Figure 23, with its excrescences and increscences, requires some left turns as well as right turns. Each left turn produces a neutral cell in the mauve exterior region—it’s an outie turn—earning a green dot. But for every outie turn added to the perimeter, you’ll have to make an additional innie turn in order to get back to your starting point. Thus, except for the four original corners, innie and outie turns are like particles and antiparticles, always created and annihilated in pairs. A closed path, no matter how convoluted, always has exactly four more innies than outies. The four-turn differential is again on exhibit in the more elaborate example of Figure 24, where the orange dots prevail 17 to 13. Indeed, I assert that there are always four more innie turns than outie turns on the perimeter of any simple (i.e., non-self-intersecting) closed curve on the square lattice. (I think this is one of those statements that is obviously true but not so simple to prove, like the claim that every simple closed curve on the plane has an inside and an outside.)

Figure 24: Jigsaw-puzzle pond.

Unfortunately, even if the statement about counting right and left turns is true, the corresponding statement about orange and green dots is not. It holds only for a rather special subclass of lattice shapes, namely those in which the perimeter line not only has no self-intersections but never comes within one lattice spacing of itself. In effect, we exclude all closed figures that have hair on their surface or pores in their skin. Figure 25 shows an example of a shape that violates the rule. Narrow, single-lane passages create neutral sites that do not come in matched innie/outie pairs. In this case there are more green dots than orange ones, which might be taken to suggest that the indigo area will grow rather than shrink.

Figure 25: Pond with hair and pores.

In my effort to explain why ponds always evaporate, I seem to have reached a dead end. I should have known from the outset that the effort was doomed. I can’t prove that ponds always shrink because they don’t. The system is ergodic: Any state can be reached from any other state in a finite number of steps. In particular, a single indigo cell (a very small pond) can grow to cover the whole lattice. The sequence of steps needed to make that happen is improbable, but it certainly exists.

If proof is out of reach, maybe we can at least persuade ourselves that the pond shrinks with high probability. And we have a tool for doing just that: It’s called the Monte Carlo method. Figure 26 follows the fate of a \(25 \times 25\) square pond embedded in an otherwise blank lattice of \(100 \times 100\) sites, evolving under Glauber dynamics at a very low temperature \((T = 0.1)\). The uppermost curve, in indigo, shows the steady evaporation of the pond, dropping from the original \(625\) sites to \(0\) after about \(320\) macrosteps. The middle traces record the abundance of Swiss sites, orange for those inside the pond and green for those outside. Because of the low temperature, these are the only sites that have any appreciable likelihood of flipping. The black trace at the bottom is the difference between orange and green. For the most part it hovers at \(+4\), never exceeding that value but seldom falling much below it, and making only one brief foray into negative territory. Statistically speaking, the system appears to vindicate the innie/outie hypothesis. The pond shrinks because there are almost always more flippable spins inside than outside.

Figure 26: Pond shrinkage, with counts of innie and outie neutral sites.

Figure 26 is based on a single run of the Monte Carlo algorithm. Figure 27 presents essentially the same data averaged over \(1{,}000\) Monte Carlo runs under the same conditions—starting with a \(25 \times 25\) square pond and applying Glauber dynamics at \(T = 0.1\).

Figure 27: Average rate of pond shrinkage.

The pond’s loss of area follows a remarkably linear path, with a steady rate very close to two lattice sites per Monte Carlo macrostep. And it’s clear that virtually all of these pondlike blocks of indigo cells disappear within a little more than \(300\) macrosteps, putting them in the tall peak at the short end of the lifetime distribution in Figure 18. None of them contribute to the long tail that extends out past \(10{,}000\) steps.


So much for the quick-drying ponds. What about the long-flowing rivers?

Figure 28: From pond to river.

We can convert a pond into a river. Take a square block of dark pixels and tug on one side, stretching the square into an elongated rectangle. When the moving side of the rectangle reaches the edge of the lattice, just keep on pulling. Because of the model’s wraparound boundary conditions, the lattice actually has no edge; when an object exits to the right, it immediately re-enters on the left. Thus you can keep pulling the (former) right side of the rectangle until it meets up with the (former) left side.

When the two sides join, everything changes. It’s not just a matter of adjusting the shape and size of the rectangle. There is no more rectangle! By definition, a rectangle has four sides and four right-angle corners. The object now occupying the lattice has only two sides and no corners. It may appear to have corners at the far left and right, but that’s an artifact of drawing the figure on a flat plane. It really lives on a torus, and the band of indigo cells is like a ring of chocolate icing that goes all the way around the doughnut. Or it’s a river—an endless river. You can walk along either bank as far as you wish, and you’ll never find a place to cross.

The topological difference between a pond and a river has dire consequences for Monte Carlo simulations of the Ising model. When the rectangle’s four corners disappeared, so did the four orange dots marking interior Swiss cells. Indeed, the river has no neutral cells at all, neither inside nor outside. At temperatures near zero, where neutral cells are the only ones that ever change state, the river becomes an all-but-eternal feature. The Monte Carlo process has no effect on it. The system is stuck in a metastable state, with no practical way to reach the lower-energy state of full magnetization.

When I first noticed how a river can block magnetization, I went looking to see what others might have written about the phenomenon. I found nothing. There was lots of talk about metastability in general, but none of the sources I consulted mentioned this particular topological impediment. I began to worry that I had made some blunder in programming or had misinterpreted what I was seeing. Finally I stumbled on a 2002 paper by Spirin, Krapivsky, and Redner that reports essentially the same observations and extends the discussion to three dimensions, where the problem is even worse.

A river with perfectly straight banks looks rather unnatural—more like a canal. Perhaps adding some meanders would make a difference in the outcome? The upper part of Figure 29 shows a river with a sinusoidal bend. The curves create 46 interior neutral cells and an exactly equal number of exterior ones. These corner points serve as handholds where the Monte Carlo process can get a grip, so one might hope that by flipping some of these spins the channel will narrow and eventually close.

Figure 29: Sinusoidal river

But that’s not what happens. The lower part of Figure 29 shows the same stretch of river after \(1{,}000\) Monte Carlo macrosteps at \(T = 0.1\). The algorithm has not amplified the sinuosity; on the contrary, it has shaved off the peaks and filled in the troughs, generally flattening the terrain. After \(5{,}000\) steps the river has returned to a perfectly straight course. No neutral cells remain, so no further change can be expected in any human time frame.

The presence or absence of four corners makes all the difference between ponds and rivers. Ponds shrink because the corners create a consistent bias: Sites subject to flipping are more numerous inside than outside, which means, over the long run, that the outside steadily encroaches on the inside’s territory. That bias does not exist for rivers, where the number of interior and exterior neutral sites is equal on average. Figure 30 records the inside-minus-outside difference for the first \(1{,}000\) steps of a simulation beginning with a sinusoidal river.

Figure 30: Innie minus outie for a river

The difference hovers near zero, though with short-lived excursions both above and below; the mean value is \(+0.062\).
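The innie-minus-outie statistic itself is easy to compute. In the sketch below (same conventions and includes as the earlier sketch), a site is neutral when its four neighbors sum to zero, and I classify a neutral site as an innie or an outie by its own spin, with indigo taken as +1; that classification is my assumption about how the tally in the figures is kept.

    // Innie-minus-outie census: a site is neutral ("Swiss") when its four
    // neighbors sum to zero (two friends, two enemies). A neutral +1 site
    // counts as an innie, a neutral -1 site as an outie.
    int innieMinusOutie(const std::vector<int> &spin, int L) {
        int diff = 0;
        for (int y = 0; y < L; y++)
            for (int x = 0; x < L; x++) {
                int sum = spin[(x + 1) % L + y * L] + spin[(x + L - 1) % L + y * L]
                        + spin[x + ((y + 1) % L) * L] + spin[x + ((y + L - 1) % L) * L];
                if (sum == 0) diff += spin[x + y * L];
            }
        return diff;
    }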

Even at somewhat higher temperatures, any pattern that crosses from one side of the grid to the other will stubbornly resist extinction. Figure 31 shows snapshots every \(1{,}000\) macrosteps in the evolution of a lattice at \(T = 1.0\), which is well below the critical temperature but high enough to allow a few energetically unfavorable spin flips. The starting configuration was a sinusoidal river, but by \(1{,}000\) steps it has already become a thick, lumpy band. In subsequent snapshots the ribbon grows thicker and thinner, migrates up and down—and then abruptly disappears, sometime between the \(8{,}000\)th and the \(9{,}000\)th macrostep.

Figure 31: Extinction of a river at \(T = 1\) (nine panels)

Swiss cells, with equal numbers of friends and enemies among their neighbors, appear wherever a boundary line takes a turn. All the rest of the sites along a boundary—on both sides—have three friendly neighbors and one enemy neighbor. At a site of this kind, flipping a spin carries an energy penalty of \(\Delta E = +4\). At \(T = 1.0\) the probability of such an event is roughly \(1/50\). In a \(10{,}000\)-site lattice crossed by a river there can be as many as \(200\) of these three-against-one sites, so we can expect to see a few of them flip during every macrostep. Thus at \(T = 1.0\) the river is not a completely static formation, as it is at temperatures closer to zero. The channel can shift or twist, grow wider or narrower. But these motions are glacially slow, not only because they depend on somewhat rare events but also because the probabilities are unbiased. At every step the river is equally likely to grow wider or narrower; on average, it goes nowhere.
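To make that estimate concrete, plug \(\Delta E = +4\) and \(T = 1.0\) into the G-rule acceptance function:

\[\frac{e^{-\Delta E/T}}{1 + e^{-\Delta E/T}} = \frac{e^{-4}}{1 + e^{-4}} \approx \frac{0.0183}{1.0183} \approx 0.018 \approx \frac{1}{55}.\]

The M-rule gives \(e^{-4} \approx 0.018\) as well, hence the figure of roughly \(1/50\).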

In one last gesture to support my claim that short-lived patterns are ponds and long-lived patterns are rivers, I offer Figure 32:

Figure 32: Pond and river lifetime histograms

Details: The simulations were run with Glauber dynamics on a \(50 \times 50\) lattice. In the red upper histogram a thousand Ising systems are launched from a square pond state and allowed to run until the initial pattern is annihilated and the lattice is approaching full magnetization. In the blue lower histogram another thousand systems begin with a sinuous river pattern and also continue until the pattern collapses and the lattice is nearly uniform. The distributions of lifetimes for the two cases are dramatically different. No pond lasts as long as \(1{,}000\) macrosteps, and the median lifetime is \(454\) steps. The median for rivers is more than \(16{,}000\) steps, and some instances go on for more than \(100{,}000\) steps, the limit where I stopped counting.


A troubling question is whether these uncrossable rivers that block full magnetization in Ising models also exist in physical ferromagnets. It seems unlikely. The rivers I describe above owe their existence to the models’ wraparound boundary conditions. The crystal lattices of real magnetic materials do not share that topology. Thus it seems that this form of metastability may be an artifact, a mere incidental feature of the model rather than something present in nature.

Statistical mechanics is generally formulated in terms of systems without boundaries. You construct a theory of \(N\) particles, but it’s truly valid only in the “thermodynamic limit,” where \(N \to \infty\). Under this regime the two-dimensional Ising model would be studied on a lattice extending over an infinite plane. Computer models can’t do that, and so we wind up with tricks like wraparound boundary conditions, which can be considered a hack for faking infinity.

It’s a pretty good hack. As in an infinite lattice, every site has the same local environment, with exactly four neighbors, who also have four neighbors, and so on. There are no edges or corners that require special treatment. For these reasons wraparound or periodic boundary conditions have always been the most popular choice for computational models in the sciences, going back at least as far as 1953. Still, there are glitches. If you were standing on a wraparound lattice, you could walk due north forever, but you’d keep passing your starting point again and again. If you looked into the distance, you’d see the back of your own head. For the Ising model, perhaps the most salient fact is this: On a genuinely infinite plane, every simple, finite, closed curve is a pond; no finite structure can behave like a river, transecting the entire surface so that you can’t get around it. Thus the wraparound model differs from the infinite one in ways that may well alter important conclusions.

These defects are a little worrisome. On the other hand, physical ferromagnets are also mere finite approximations to the unbounded spaces of thermodynamic theory. A single magnetic domain might have \(10^{20}\) atoms, which is large compared with the \(10^4\) sites in the models presented here, but it’s still a long way short of infinity. The domains have boundaries, which can have a major influence on their properties. All in all, it seems like a good idea to explore the space of possible boundary conditions, including some alternatives to the wraparound convention. Hence Program 4:

An extra row of cells around the perimeter of the lattice serves to make the boundary conditions visible in this simulation. The cells in this halo layer are not active participants in the Ising process; they serve as neighbors to the cells on the periphery of the lattice, but their own states are not updated by the Monte Carlo algorithm. To mark their special role, their up and down states are indicated by red and pink instead of indigo and mauve.

The behavior of wraparound boundaries is already familiar. If you examine the red/pink stripe along the right edge of the grid, you will see that it matches the leftmost indigo/mauve column. Similar relations determine the patterns along the other edges.

The two simplest alternatives to the wraparound scheme are static borders made up of cells that are always up or always down. You can probably guess how they will affect the outcome of the simulation. Try setting the temperature around 1.5 or 2.0, then click back and forth between all up and all down as the program runs. The border color quickly invades the interior space, encircling a pond of the opposite color and eventually squeezing it down to nothing. Switching to the opposite border color brings an immediate re-enactment of the same scene with all colors reversed. The biases are blatant.

Another idea is to assign the border cells random values, chosen independently and with equal probability. A new assignment is made after every macrostep. Randomness is akin to high temperature, so this choice of boundary condition amounts to an Ising lattice surrounded by a ring of fire. There is no bias in favor of up or down, but the stimulation from the sizzling periphery creates recurrent disturbances even at temperatures near zero, so the system never attains a stable state of full magnetization.

Before I launched this project, my leading candidate for a better boundary condition was a zero border. This choice is equivalent to an “open” or “free” boundary, or to no boundary at all—a universe that just ends in blankness. Implementing open boundaries is slightly irksome because cells on the verge of nothingness require special handling: Those along the edges have only three neighbors, and those in the corners only two. A zero boundary produces the same effect as a free boundary without altering the neighbor-counting rules. The cells of the outer ring all have a numerical value of \(0\), indicated by gray. For the interior cells with numerical values of \(+1\) and \(-1\), the zero cells act as neighbors without actually contributing to the \(\Delta E\) calculations that determine whether or not a spin flips.

The zero boundary introduces no bias favoring up or down, it doesn’t heat or cool the system, and it doesn’t tamper with the topology, which remains a simple square embedded in a flat plane. Sounds ideal, no? However, it turns out the zero boundary has a lot in common with wraparound borders. In particular, it allows persistent rivers to form—or maybe I should call them lakes. I didn’t see this coming before I tried the experiment, but it’s not hard to understand what’s happening. On the wraparound lattice, elongating a rectangle until two opposite edges meet eliminates the Swiss cells at the four corners. The same thing happens when a rectangle extends all the way across a lattice with zero borders. The corner cells, now up against the border, no longer have two friendly and two enemy neighbors; instead they have two friends, one enemy, and one cell of spin zero, for a net \(\Delta E\) of \(+2\).

A pleasant surprise of these experiments was the boundary type I have labeled sampled. The idea is to make the boundary match the statistics of the interior of the lattice, but without regard to the geometry of any patterns there. For each border cell \(b\) we select an interior cell \(s\) at random, and assign the color of \(s\) to \(b\). The procedure is repeated after each macrostep. The border therefore maintains the same up/down proportion as the interior lattice, and always favors the majority. If the spins are evenly split between mauve and indigo, the border region shows no bias; as soon as the balance begins to tip, however, the border shifts in the same direction, supporting and hastening the trend.
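In code, all of these border styles come down to different rules for refilling the halo ring after each macrostep. Here is a minimal sketch of that dispatch (same includes as the earlier sketches); the enum, the layout (an \((L+2) \times (L+2)\) array whose interior cells occupy indices \(1\) through \(L\)), and the names are mine, not necessarily how Program 4 is actually written.

    enum class Border { Wraparound, AllUp, AllDown, Random, Zero, Sampled };

    // Refill the one-cell halo ring around an (L+2) x (L+2) array of cells.
    // Interior cells hold +1 or -1; halo cells may also hold 0. Called once
    // after every macrostep.
    void refreshHalo(std::vector<int> &cell, int L, Border b, std::mt19937 &rng) {
        int W = L + 2;
        std::uniform_int_distribution<int> pick(1, L);  // random interior coordinate
        std::uniform_int_distribution<int> fair(0, 1);
        auto at = [&](int x, int y) -> int & { return cell[x + y * W]; };
        auto value = [&](int x, int y) -> int {
            switch (b) {
            case Border::Wraparound:  // copy the interior cell one lattice-width away
                return at((x + L - 1) % L + 1, (y + L - 1) % L + 1);
            case Border::AllUp:   return +1;
            case Border::AllDown: return -1;
            case Border::Random:  return fair(rng) ? +1 : -1;  // the ring of fire
            case Border::Zero:    return 0;   // contributes nothing to dE
            case Border::Sampled: return at(pick(rng), pick(rng));
            }
            return 0;
        };
        for (int x = 0; x < W; x++) { at(x, 0) = value(x, 0); at(x, W - 1) = value(x, W - 1); }
        for (int y = 1; y < W - 1; y++) { at(0, y) = value(0, y); at(W - 1, y) = value(W - 1, y); }
    }

Note that with this representation the \(\Delta E\) computation needs no special cases: a zero halo cell simply contributes nothing to the neighbor sum. (The underdog variant described in the next paragraph is just the Sampled case with the sign of the sampled spin reversed.)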

If you tend to root for the underdog, this rule is not for you—but we can turn it upside down, assigning a color opposite that of a randomly chosen interior cell. The result is interesting. Magnetization is held near \(0\), but at low temperature the local correlation coefficient approaches \(1\). The lattice devolves into two large blobs of no particular shape that circle the ring like wary wrestlers, then eventually reach a stable truce in which they split the territory either vertically or horizontally. This behavior has no obvious bearing on ferromagnetism, but maybe there’s an apt analogy somewhere in the social or political sciences.

The curves in Figure 33 record the response to a sudden temperature step in systems using each of six boundary conditions. The all-up and all-down boundaries converge the fastest—which is no surprise, since they put a thumb on the scale. The response of the sampled boundary is also quick, reflecting its weathervane policy of supporting the majority. The random and zero boundaries are the slowest; they follow identical trajectories, and I don’t know why. Wraparound is right in the middle of the pack. All of these results are for Glauber dynamics, but the curves for the Metropolis algorithm are very similar.

Figure 33: Boundary condition graph (Glauber)

The menu in Program 4 has one more choice, labeled twisted. I wrote the code for this one in response to the question, “I wonder what would happen if…?” Twisted is the same as wraparound, except that one side is given a half-twist before it is mated with the opposite edge. Thus if you stand on the right edge near the top of the lattice and walk off to the right, you will re-enter on the left near the bottom. The object formed in this way is not a torus but a Klein bottle—a “nonorientable surface without boundary.” All I’m going to say about running an Ising model on this surface is that the results are not nearly as weird as I expected. See for yourself.
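In halo-free terms, the twist changes only the rule for the two glued columns: leaving the lattice at row \(y\) brings you back at the mirrored row. A sketch of the lookup, this time on a flat \(L \times L\) array (again, my own indexing, not Program 4’s):

    // Twisted (Klein bottle) boundary: the left and right edges are glued
    // with a half-twist, so row y on one side meets row L-1-y on the other.
    // The top and bottom edges wrap around normally, as on the torus.
    int rightNeighbor(const std::vector<int> &spin, int L, int x, int y) {
        if (x + 1 < L) return spin[(x + 1) + y * L];
        return spin[0 + (L - 1 - y) * L];  // cross the seam to the mirrored row
    }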


I have one more toy to present for your amusement: the MCMC microscope. It was the last program I wrote, but it should have been the first.

All of the programs above produce movies with one frame per macrostep. In that high-speed, high-altitude view it can be hard to see how individual lattice sites are treated by the algorithm. The MCMC microscope provides a slo-mo close-up, showing the evolution of a Monte Carlo Ising system one microstep at a time. The algorithm proceeds from site to site (in an order determined by the visitation sequence) and either flips the spin or not (according to the acceptance function).

As the algorithm proceeds, the site currently under examination is marked by a hot-pink outline. Sites that have yet to be visited are rendered in the usual indigo or mauve; those that have already had their turn are shown in shades of gray. The Microstep button advances the algorithm to the next site (determined by the visitation sequence) and either flips the spin or leaves it as-is (according to the acceptance function). The Macrostep button performs a full sweep of the lattice and then pauses; the Run button invokes a continuing series of microsteps at a somewhat faster pace. Some adjustments to this protocol are needed for the simultaneous update option. In this mode no spins are changed during the scan of the lattice, but those that will change are marked with a small square of contrasting gray. At the end of the macrostep, all the changes are made at once.
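The bookkeeping for simultaneous updating is worth spelling out: every site is judged against the frozen current configuration, and the accepted flips are committed only after the full sweep. A sketch, with the acceptance function passed in as a parameter (needs <functional> and <vector>; the names are mine):

    // Simultaneous update: decide all flips from the frozen configuration,
    // then apply them at once at the end of the macrostep.
    void simultaneousMacrostep(std::vector<int> &spin, int L,
                               const std::function<bool(double dE)> &accept) {
        std::vector<char> willFlip(L * L, 0);
        for (int i = 0; i < L * L; i++) {
            int x = i % L, y = i / L;
            int sum = spin[(x + 1) % L + y * L] + spin[(x + L - 1) % L + y * L]
                    + spin[x + ((y + 1) % L) * L] + spin[x + ((y + L - 1) % L) * L];
            willFlip[i] = accept(2.0 * spin[i] * sum);
        }
        for (int i = 0; i < L * L; i++)
            if (willFlip[i]) spin[i] = -spin[i];
    }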

The Dotted Swiss checkbox paints orange and green dots on neutral cells (those with equal numbers of friendly and enemy neighbors). Doodle mode allows you to draw on the lattice via mouse clicks and thereby set up a specific initial pattern.

I’ve found it illuminating to draw simple geometric figures in doodle mode, then watch as they are transformed and ultimately destroyed by the various algorithms. These experiments are particularly interesting with the Metropolis algorithm at very low temperature. Under these conditions the Monte Carlo process—despite its roots in randomness—becomes very nearly deterministic. Cells with \(\Delta E \le 0\) always flip; other cells never do. (What, never? Well, hardly ever.) Thus we can speak of what happens when a program is run, rather than just describing the probability distribution of possible outcomes.
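The parenthetical hedge can be quantified. Under the M-rule an energetically unfavorable flip is accepted with probability \(e^{-\Delta E/T}\), and the cheapest unfavorable flip on this lattice costs \(\Delta E = +4\). At \(T = 0.1\), for example,

\[e^{-\Delta E/T} = e^{-4/0.1} = e^{-40} \approx 4 \times 10^{-18},\]

so “hardly ever” means you could watch the simulation for a lifetime without seeing a single such flip.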

Here’s a recipe to try: Set the temperature to its lower limit, choose doodle mode, down initialization, typewriter visitation, and the M-rule acceptance function. Now draw some straight lines on the grid in four orientations: vertical, horizontal, and along both diagonals. Each line can be six or seven cells long, but don’t let them touch. Lines in three of the four orientations are immediately erased when the program runs; they disappear after the first macrostep. The one survivor is the diagonal line oriented from lower left to upper right, or southwest to northeast. With each macrostep the line migrates one cell to the left, and also loses one site at the bottom. This combination of changes gives the subjective impression that the pattern is moving not only left but also upward. I’m pretty sure that this phenomenon is responsible for the fluttering wings illusion seen at much higher temperatures (and higher animation speeds).

If you perform the same experiment with the diagonal visitation order, you’ll see exactly the same outcomes. A question I can’t answer is whether there is any pattern that serves to discriminate between the typewriter and diagonal orders. What I’m seeking is some arrangement of indigo cells on a mauve background that I could draw on the grid and then look away while you ran one algorithm or the other for some fixed number of macrosteps (which I get to specify). Afterwards, I win if I can tell which visitation sequence you chose.

The checkerboard algorithm is also worth trying with the four line orientations. The eventual outcome is the same, but the intermediate stages are quite different.


Finally I offer a few historical questions that seem hard to settle, and some philosophical musings on what it all means.

How did the method get the name “Monte Carlo”?

The name, of course, is an allusion to the famous casino, a prodigious producer and consumer of randomness. Nicholas Metropolis claimed credit for coming up with the term. In a 1987 retrospective he wrote:

It was at that time [spring of 1947] that I suggested an obvious name for the statistical method—a suggestion not unrelated to the fact that Stan [Ulam] had an uncle who would borrow money from relatives because he “just had to go to Monte Carlo.”

An oddity of this story is that Metropolis was not at Los Alamos in 1947. He left after the war and didn’t return until 1948.

Ulam’s account of the matter does not contradict the Metropolis version, but it’s less colorful. (Marshall Rosenbluth does contradict Metropolis, writing: “The basic idea, as well as the name was due to Stan Ulam originally.” But Rosenbluth wasn’t at Los Alamos in 1947 either.) In his autobiography, written in the 1970s, Ulam mentions an uncle who is buried in Monte Carlo, but he says nothing about the uncle gambling or borrowing from relatives. (In fact it seems the uncle was the wealthiest member of the family.) Ulam’s only comment on the name reads as follows:

It seems to me that the name Monte Carlo contributed very much to the popularization of this procedure. It was named Monte Carlo because of the element of chance, the production of random numbers with which to play the suitable games.

Note the anonymous passive voice: “It was named…,” with no hint of by whom. If Ulam was so carefully noncommittal, who am I to insist on a definite answer?

As far as I know, the phrase “Monte Carlo method” first appeared in public print in 1949, in an article co-authored by Metropolis and Ulam. Presumably the term was in use earlier among the denizens of the Los Alamos laboratory. Daniel McCracken, in a 1955 Scientific American article, said it was a code word invented for security reasons. This is not implausible. Code words were definitely a thing at Los Alamos (the place itself was designated “Project Y”), but I’ve never seen the code word status of “Monte Carlo” corroborated by anyone with first-hand knowledge.

Who invented the Metropolis algorithm?

To raise the question, of course, is to hint that it was not Metropolis.

The 1953 paper that introduced Markov chain Monte Carlo, “Equation of State Calculations by Fast Computing Machines,” had five authors, who were listed in alphabetical order: Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. The two Rosenbluths were wife and husband, as were the two Tellers. Who did what in this complicated collaboration? Apparently no one thought to ask that question until 2003, when J. E. Gubernatis of Los Alamos was planning a symposium to mark the 50th anniversary of MCMC. He got in touch with Marshall Rosenbluth, who was then in poor health. Nevertheless, Rosenbluth attended the gathering, gave a talk, and sat for an interview. (He died a few months later.)

According to Rosenbluth, the basic idea behind MCMC—sampling the states of a system according to their Boltzmann weight, while following a Markov chain from one state to the next—came from Edward Teller. Augusta Teller wrote a first draft of a computer program to implement the idea. Then the Rosenbluths took over. In particular, it was Arianna Rosenbluth who wrote the program that produced all the results reported in the 1953 paper. Gubernatis adds:

Marshall’s recounting of the development of the Metropolis algorithm first of all made it very clear that Metropolis played no role in its development other than providing computer time.

In his interview, Rosenbluth was even blunter: “Metropolis was boss of the computer laboratory. We never had a single scientific discussion with him.”

These comments paint a rather unsavory portrait of Metropolis as a credit mooch. I don’t know to what extent that harsh verdict might be justified. In his own writings, Metropolis makes no overt claims about his contributions to the work. On the other hand, he also makes no disclaimers; he never suggests that someone else’s name might be more appropriately attached to the algorithm.

An interesting further question is who actually wrote the 1953 paper—who put the words together on the page. Internal textual evidence suggests there were at least two writers. Halfway through the article there’s a sudden change of tone, from gentle exposition to merciless technicality.

In recent years the algorithm has acquired the hyphenated moniker Metropolis-Hastings, acknowledging the contributions of W. Keith Hastings, a Canadian mathematician and statistician. Hastings wrote a 1970 paper that generalized the method, showing it could be applied to a wider class of problems, with probability distributions other than Boltzmann’s. Hastings is also given credit for rescuing the technique from captivity among the physicists and bringing it home to statistics, although it was another 20 years before the statistics community took much notice.

I don’t know who started the movement to name the generalized algorithm “Metropolis-Hastings.” The hyphenated term was already fairly well established by 1995, when Siddhartha Chib and Edward Greenberg put it in the title of a review article.

Who invented Glauber dynamics?

In this case there is no doubt or controversy about authorship. Glauber wrote the 1963 paper, and he did the work reported in it. On the other hand, Glauber did not invent the Monte Carlo algorithm that now goes by the name “Glauber dynamics.” His aim in tackling the Ising model was to find exact, mathematical solutions, in the tradition of Ising and Onsager. (Those two authors are the only ones cited in Glauber’s paper.) He never mentions Monte Carlo methods or any other computational schemes.

So who did devise the algorithm? The two main ingredients—the G-rule and the random visitation sequence—were already on the table in the 1950s. A form of the G-rule acceptance function \(e^{-\Delta E/T} / (1 + e^{-\Delta E/T})\) was proposed in 1954 by John G. Kirkwood of Yale University, a major figure in statistical mechanics at midcentury. He suggested it to the Los Alamos group as an alternative to the M-rule. Although the suggestion was not taken, the group did acknowledge that it would produce valid simulations. The random visitation sequence was used in a followup study by the Los Alamos group in 1957. (By then the group was led by William W. Wood, who had been a student of Kirkwood.)

Those two ingredients first came together a few years later in work by P. A. Flinn and G. M. McManus, who were then at Westinghouse Research in Pittsburgh. Their 1961 paper describes a computer simulation of an Ising model with both random visitation order and the \(e^{-\Delta E/T} / (1 + e^{-\Delta E/T})\) acceptance function, two years before Glauber’s article appeared. On grounds of publication priority, shouldn’t the Monte Carlo variation be named for Flinn and McManus rather than Glauber?

For a while, it was. There were dozens of references to Flinn and McManus throughout the 1960s and 70s. For example, an article by G. W. Cunningham and P. H. E. Meijer compared and evaluated the two main MCMC methods, identifying them as algorithms introduced by “Metropolis et al.” and by “Flinn and McManus.” A year later another compare-and-contrast article by John P. Valleau and Stuart G. Whittington adopted the same terminology. Neither of these articles mentions Glauber.

According to Semantic Scholar, the phrase “Glauber dynamics” first appeared in the physics literature in 1977, in an article by Ph. A. Martin. But this paper is a theoretical work, with no computational component, along the same lines as Glauber’s own investigation. Among the Semantic Scholar listings, “Glauber dynamics” was first mentioned in the context of Monte Carlo studies by A. Sadiq and Kurt Binder, in 1984. After that, the balance shifted strongly toward Glauber.

In bringing up the disappearance of Flinn and McManus from the Ising and Monte Carlo literature, I don’t mean to suggest that Glauber doesn’t deserve his recognition. His main contribution to studies of the Ising model—showing that it could give useful results away from equilibrium—is of the first importance. On the other hand, attaching his name to a Monte Carlo algorithm is unhelpful. If you turn to his 1963 paper to learn about the origin of the algorithm, you’ll be disappointed.

One more oddity. I have been writing the G-rule as

\[\frac{e^{-\Delta E/T}}{1 + e^{-\Delta E/T}},\]

which is the way it appeared in Flinn and McManus, as well as in many recent accounts of the algorithm. However, nothing resembling this expression is to be found in Glauber’s paper. Instead he defined the rule in terms of the hyperbolic tangent. Reconstructing various bits of his mathematics in a form that could serve as a Monte Carlo acceptance function, I come up with:

\[\frac{1}{2}\left(1 -\tanh \frac{\Delta E}{2 T}\right).\]

The two expressions are mathematically synonymous, but the prevalence of the first form suggests that some authors who cite Glauber rather than Flinn and McManus are not getting their notation from the paper they cite.
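The equivalence is a two-line exercise. Writing \(x = \Delta E/T\):

\[\frac{1}{2}\left(1 - \tanh\frac{x}{2}\right) = \frac{1}{2}\cdot\frac{\left(e^{x/2} + e^{-x/2}\right) - \left(e^{x/2} - e^{-x/2}\right)}{e^{x/2} + e^{-x/2}} = \frac{e^{-x/2}}{e^{x/2} + e^{-x/2}} = \frac{e^{-x}}{1 + e^{-x}}.\]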

Who made the first pictures of an Ising lattice?

When I first heard of the Ising model, sometime in the 1970s, I would read statements along the lines of “as the system cools to the critical temperature, fluctuations grow in scale until they span the entire lattice.” I wanted to see what that looked like. What kinds of patterns or textures would appear, and how would they evolve over time? In those days, live motion graphics were too much to ask for, but it seemed reasonable to expect at least a still image, or perhaps a series of them covering a range of temperatures.

In my reading, however, I found no pictures. Part of the reason was surely technological. Turning computations into graphics wasn’t so easy in those days. But I suspect another motive as well. A computational scientist who wanted to be taken seriously was well advised to focus on quantitative results. A graph of magnetization as a function of temperature was worth publishing; a snapshot of a single lattice configuration might seem frivolous—not real physics but a plaything like the Game of Life. Nevertheless, I still yearned to see what it would look like.

In 1979 I had an opportunity to force the issue. I was working with Kenneth G. Wilson, a physicist then at Cornell University, on a Scientific American article about “Problems in Physics with Many Scales of Length.” The problems in question included the Ising model, and I asked Wilson if he could produce pictures showing spin configurations at various temperatures. He resisted; I persisted; a few weeks later I received a fat envelope of fanfold paper, covered in arrays of \(1\)s and \(0\)s. With help from the Scientific American art department the numbers were transformed into black and white squares:

Figure 34: Wilson Ising at Tc

This particular image, one of three we published, is at the critical temperature. Wilson credited his colleagues Stephen Shenker and Jan Tobochnik for writing the program that produced it.

The lattice pictures made by Wilson, Shenker, and Tobochnik were the first I had ever seen of an Ising model at work, but they were not the first to be published. In recent weeks I’ve discovered a 1974 paper by P. A. Flinn in which black-and-white spin tableaux form the very centerpiece of the presentation. Flinn discusses aspects of the appearance of these grids that would be very hard to reduce to simple numerical facts:

Phase separation may be seen to occur by the formation and growth of clusters, but they look rather more like “seaweed” than like the roughly round clusters of traditional theory. The structures look somewhat like those observed in phase-separated glass.

I also found one even earlier instance of lattice diagrams, in a 1963 paper by J. R. Beeler, Jr., and J. A. Delaney. Are they the first?

What does the Ising model model?

Modeling calls for a curious mix of verisimilitude and fakery. A miniature steam locomotive chugging along the tracks of a model railroad reproduces in meticulous detail the pistons and linkage rods that drive the wheels of the real locomotive. But in the model it’s the wheels that impart motion to the links and pistons, not the other way around. The model’s true power source is hidden—an electric motor tucked away inside, where the boiler ought to be.

Scientific models also rely on shortcuts and simplifications. In a physics textbook you will meet the ideal gas, the frictionless pendulum, the perfectly elastic spring, the falling body that encounters no air resistance, the planet whose entire mass is concentrated at a dimensionless point. Such idealizations are not necessarily defects. By brushing aside irrelevant details, a good model allows a deeper truth to shine through. The problem, of course, is that some details are not irrelevant.

The Ising model is a fascinating case study in this process. Lenz and Ising set out to explain ferromagnetism, and almost all later discussions of the model (including the one you are reading right now) put some emphasis on that connection. The original aim was to find the simplest framework that would exhibit important properties of real ferromagnets, most notably the sudden onset of magnetization at the Curie temperature. As far as I can tell, the Ising model has failed in this respect. Some of the omitted details were of the essence; quantum mechanics just won’t go away, no matter how much we might like it to. These days, serious students of magnetism seem to have little interest in simple grids of flipping spins. A 2006 review of “modeling, analysis, and numerics of ferromagnetism,” by Martin Kružík and Andreas Prohl, doesn’t even mention the Ising model.

Yet the model remains wildly popular, the subject of hundreds of papers every year. Way back in 1967, Stephen G. Brush wrote that the Ising model had become “the preferred basic theory of all cooperative phenomena.” I’d go even further. I think it’s fair to say the Ising model has become an object of study for its own sake. The quest is to understand the phase diagram of the Ising system itself, whether or not it tells us anything about magnets or other physical phenomena.

Uprooting the Ising system from its ancestral home in physics leaves us with a model that is not a model of anything. It’s like a map of an imaginary territory; there is no ground truth. You can’t check the model’s accuracy by comparing its predictions with the results of experiments.

Seeing the Ising model as a free-floating abstraction, untethered from the material world, is a prospect I find exhilarating. We get to make our own universe—and we’ll do it right this time, won’t we! However, losing touch with physics is also unsettling. On what basis are we to choose between versions of the model, if not through fidelity to nature? Are we to be guided only by taste or convenience? A frequent argument in support of Glauber dynamics is that it seems more “natural” than the Metropolis algorithm. I would go along with that judgment: The random visitation sequence and the smooth, symmetrical curve of the G-rule both seem more like something found in nature than the corresponding Metropolis apparatus. But does naturalness matter if the model is solely a product of human imagination?

Further Reading

Bauer, W. F. 1958. The Monte Carlo method. Journal of the Society for Industrial and Applied Mathematics 6(4):438–451.
http://www.cs.fsu.edu/~mascagni/Bauer_1959_Journal_SIAM.pdf

Beeler, J. R. Jr., and J. A. Delaney. 1963. Order-disorder events produced by single vacancy migration. Physical Review 130(3):962–971.

Binder, Kurt. 1985. The Monte Carlo method for the study of phase transitions: a review of some recent progress. Journal of Computational Physics 59:1–55.

Binder, Kurt, and Dieter W. Heermann. 2002. Monte Carlo Simulation in Statistical Physics: An Introduction. Fourth edition. Berlin: Springer-Verlag.

Brush, Stephen G. 1967. History of the Lenz-Ising model. Reviews of Modern Physics 39:883–893.

Chib, Siddhartha, and Edward Greenberg. 1995. Understanding the Metropolis-Hastings Algorithm. The American Statistician 49(4): 327–335.

Cipra, Barry A. 1987. An introduction to the Ising model. American Mathematical Monthly 94:937–959.

Cunningham, G. W., and P. H. E. Meijer. 1976. A comparison of two Monte Carlo methods for computations in statistical mechanics. Journal of Computational Physics 20:50–63.

Davies, E. B. 1982. Metastability and the Ising model. Journal of Statistical Physics 27(4):657–675.

Diaconis, Persi. 2009. The Markov chain Monte Carlo revolution. Bulletin of the American Mathematical Society 46(2):179–205.

Eckhardt, R. 1987. Stan Ulam, John von Neumann, and the Monte Carlo method. Los Alamos Science 15:131–137.

Flinn, P. A., and G. M. McManus. 1961. Monte Carlo calculation of the order-disorder transformation in the body-centered cubic lattice. Physical Review 124(1):54–59.

Flinn, P. A. 1974. Monte Carlo calculation of phase separation in a two-dimensional Ising system. Journal of Statistical Physics 10(1):89–97.

Fosdick, L. D. 1963. Monte Carlo computations on the Ising lattice. Methods in Computational Physics 1:245–280.

Geyer, Charles J. 2011. Introduction to Markov chain Monte Carlo. In Handbook of Markov Chain Monte Carlo, edited by Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, pp. 3–48. Boca Raton: Taylor & Francis.

Glauber, R. J. 1963. Time-dependent statistics of the Ising model. Journal of Mathematical Physics 4:294–307.

Gubernatis, James E. (editor). 2003. The Monte Carlo Method in the Physical Sciences: Celebrating the 50th Anniversary of the Metropolis Algorithm. Melville, N.Y.: American Institute of Physics.

Halton, J. H. 1970. A retrospective and prospective survey of the Monte Carlo method. SIAM Review 12(1):1–63.

Hammersley, J. M., and D. C. Handscomb. 1964. Monte Carlo Methods. London: Chapman and Hall.

Hastings, W. K. 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57(1):97–109.

Hayes, Brian. 2000. Computing science: The world in a spin. American Scientist, Vol. 88, No. 5, September-October 2000, pages 384–388. http://bit-player.org/bph-publications/AmSci-2000-09-Hayes-Ising.pdf

Hitchcock, David B. 2003. A history of the Metropolis–Hastings algorithm. The American Statistician 57(4):254–257. https://doi.org/10.1198/0003130032413

Hurd, Cuthbert C. 1985. A note on early Monte Carlo computations and scientific meetings. Annals of the History of Computing 7(2):141–155.

Ising, Ernst. 1925. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik 31:253–258.

Janke, Wolfhard, Henrik Christiansen, and Suman Majumder. 2019. Coarsening in the long-range Ising model: Metropolis versus Glauber criterion. Journal of Physics: Conference Series, Volume 1163, International Conference on Computer Simulation in Physics and Beyond 24–27 September 2018, Moscow, Russian Federation. https://iopscience.iop.org/article/10.1088/1742-6596/1163/1/012002

Kobe, S. 1995. Ernst Ising—physicist and teacher. Actas: Noveno Taller Sur de Fisica del Solido, Misión Boroa, Chile, 26–29 April 1995. http://arXiv.org/cond-mat/9605174

Kružík, Martin, and Andreas Prohl. 2006. Recent developments in the modeling, analysis, and numerics of ferromagnetism. SIAM Review 48(3):439–483.

Liu, Jun S. 2004. Monte Carlo Strategies in Scientific Computing. New York: Springer Verlag.

Lu, Wentao T., and F. Y. Wu. 2000. Ising model on nonorientable surfaces: Exact solution for the Möbius strip and the Klein bottle. arXiv: cond-mat/0007325

Martin, Ph. A. 1977. On the stochastic dynamics of Ising models. Journal of Statistical Physics 16(2):149–168.

McCoy, Barry M., and Jean-Marie Maillard. 2012. The importance of the Ising model. Progress in Theoretical Physics 127:791–817. https://arxiv.org/abs/1203.1456v1

McCracken, Daniel D. 1955. The Monte Carlo method. Scientific American 192(5):90–96.

Metropolis, Nicholas, and S. Ulam. 1949. The Monte Carlo method. Journal of the American Statistical Association 44(247):335–341.

Metropolis, Nicholas, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. 1953. Equation of state calculations by fast computing machines. The Journal of Chemical Physics 21(6):1087–1092.

Metropolis, N. 1987. The beginning of the Monte Carlo method. Los Alamos Science 15:125–130.

Moore, Cristopher, and Stephan Mertens. 2011. The Nature of Computation. Oxford: Oxford University Press.

Onsager, Lars. 1944. Crystal Statistics. I. A two-dimensional model with an order-disorder transition. Physical Review 65:117–149.

Pérez, Gabriel, Francisco Sastre, and Rubén Medina. 2002. Critical exponents for extended dynamical systems with simultaneous updating: the case of the Ising model. Physica D 168–169:318–324.

Peierls, R. 1936. On Ising’s model of ferromagnetism. Proceedings of the Cambridge Philosophical Society, Mathematical and Physical Sciences 32:477–481.

Peskun, P. H. 1973. Optimum Monte-Carlo sampling using Markov chains. Biometrika 60(3):607–612.

Richey, Matthew. 2010. The evolution of Markov chain Monte Carlo methods. American Mathematical Monthly 117:383–413.

Robert, Christian, and George Casella. 2011. A short history of Markov chain Monte Carlo: Subjective recollections from incomplete data. Statistical Science 26(1):102–115. Also appears as a chapter in Handbook of Markov Chain Monte Carlo. Also available as arXiv preprint 0808.2902v7.

Rosenbluth, Marshall N. 2003a. Genesis of the Monte Carlo algorithm for statistical mechanics. AIP Conference Proceedings 690, 22. https://doi.org/10.1063/1.1632112.

Rosenbluth, Marshall N. 2003b. Interview with Kai-Henrik Barth, La Jolla, California, August 11, 2003. Niels Bohr Library & Archives, American Institute of Physics, College Park, MD. https://www.aip.org/history-programs/niels-bohr-library/oral-histories/28636-1

Sadiq, A., and K. Binder. 1984. Dynamics of the formation of two-dimensional ordered structures. Journal of Statistical Physics 35(5/6):517–585.

Spirin, V., P. L. Krapivsky, and S. Redner. 2002. Freezing in Ising ferromagnets. Physical Review E 65(1):016119. https://arxiv.org/abs/cond-mat/0105037.

Stigler, Stephen M. 1991. Stochastic simulation in the nineteenth century. Statistical Science 6:89–97.

Stoll, E., K. Binder, and T. Schneider. 1973. Monte Carlo investigation of dynamic critical phenomena in the two-dimensional kinetic Ising model. Physical Review B 8(7):3266–3289.

Tierney, Luke. 1994. Markov chains for exploring posterior distributions. The Annals of Statistics 22(4):1701–1762.

Ulam, Stanislaw M. 1976, 1991. Adventures of a Mathematician. Berkeley: University of California Press.

Valleau, John P., and Stuart G. Whittington. 1977. Monte Carlo in statistical mechanics: choosing between alternative transition matrices. Journal of Computational Physics 24:150–157.

Wilson, Kenneth G. 1979. Problems in physics with many scales of length. Scientific American 241(2):158–179.

Wood, W. W. 1986. Early history of computer simulations in statistical mechanics. In Molecular-Dynamics Simulation of Statistical-Mechanics Systems, edited by G. Ciccotti and W. G. Hoover (North-Holland, New York): pp. 3–14. https://digital.library.unt.edu/ark:/67531/metadc1063911/m1/1/

Wood, William W. 2003. A brief history of the use of the Metropolis method at LANL in the 1950s. AIP Conference Proceedings 690, 39. https://doi.org/10.1063/1.1632115

Charles Petzold: Why Not Applaud Between the Movements?

One

... more ...

Tea Masters: "The middle path is the surest"

This quote in praise of the middle path sounds very Taoist or Buddhist to our contemporary ears. If there's just one thing that we know about Chinese philosophy, it's the importance of avoiding excesses and staying midway. You could argue that this is even translated or symbolized by the name of China, Zhong Guo, the country of the middle!

And yet, I've found this quote in Ovid's poem Metamorphoses (Book II, verse 137): "Medio tutissimus ibis". Phaeton, son of Phoebus, wants to drive his father's chariot, which pulls the sun across the sky. Phoebus foolishly agreed to his son's dangerous wish. Before Phaeton embarks, his father tells him how to hold the reins of his horses and how to drive the chariot. And since the earth needs a moderate temperature, it's safest to be neither too close to the ground nor too far from it. Otherwise the soil could burn or freeze. That's why he concludes, very philosophically, that the middle path is the safest!

Wenshan Baozhong

The story ends in tragedy, because the mortal son of a god is not skilled, experienced and powerful enough to pull the sun with his father's chariot. He loses control of his vehicle and dies from his burns. So, even good advice is not enough when you are a total beginner. Theoretical knowledge is not the same as what you gain from practice and experience (gongfu).

What else can we learn from this story for our practice of tea (and for life, maybe)? The middle path is especially useful for parameters that are not well defined by theory, by the rules of the art. Two parameters come to mind:

1. How many leaves should I brew? 
Put too few and the brew will be bland. Put too many and the brew risks becoming bitter, too strong and lacking harmony. There's no definitive answer to this question. It depends on the drinker's taste, the size of the teapot, the kind of tea he's brewing... So, the safest way is the middle way: neither too few nor too many. With experience, you'll learn when to put more and when to put less.

2. How long should I brew my tea?
This depends on how many leaves one is brewing, how much aroma is present in the leaves, and how quickly it is released. There's no right answer, just some guidelines. Brews should be shorter at the start, when the leaves are potent, and longer at the end, as the leaves become weaker. This is the case with Wenshan Baozhong and similarly shaped teas. An exception is the first brew of ball-shaped Oolong, because it takes time to open up the rolled leaves.


When it comes to the brewing temperature, is the middle path the surest? Should your water be neither too hot nor too cold?
This is the surest way to failure, because the brew would always be bland! When you have a clear rule that says tea is best made with just-boiled water, the surest path is to follow that instruction. Failure won't come from the temperature of the water, but from the other parameters.

However, the middle way works for the question 'how hard or fast should I pour my boiling water on the leaves?'
If you are new to gongfu cha, pouring the just-boiled water with medium strength is the safest bet. It's only with experience, trial and error, that you'll slowly learn when to pour slow and when to pour fast, and where and from how high!

In conclusion, we can say that the philosophy of the middle path is common to Western and Eastern classical thought. It applies especially to situations and parameters that we encounter for the first time or about which we have no knowledge. However, the more you know, the more you can be bold without risk. It's like driving on the highway: 55 mph is a much safer speed than 25 mph, provided you know how to drive and the road is not congested!
 

new shelton wet/dry: Every day, the same, again


Amazon.com Inc. has won U.S. permission to use radar to monitor consumers’ sleep habits.

Elon Musk’s testimony in Tesla lawsuit paused as lawyer vomits in jury box

Tel Aviv dog owners must now register their dog’s DNA with the municipality. This will then allow municipal inspectors to collect samples from dog feces left uncollected in the streets, and a fine will be sent by mail to the owner who did not clean up.

Mother kills husband with boiling water after learning he allegedly sexually abused children for years — “Smith killed her husband Michael in such a painful and cruel way. To throw boiling water over someone when they are asleep is absolutely horrific,” said Detective Chief Inspector Paul Hughes. “The sugar placed into the water makes it vicious. It becomes thicker and stickier and sinks into the skin better. It left Michael in agony.”

Training Ferrets to Recognize Virus Odor in Duck Droppings

A wet-bulb temperature of 35 °C, or around 95 °F, is pretty much the absolute limit of human tolerance, says Zach Schlader, a physiologist at Indiana University Bloomington. Above that, your body won’t be able to lose heat to the environment efficiently enough to maintain its core temperature. That doesn’t mean the heat will kill you right away, but if you can’t cool down quickly, brain and organ damage will start.

The profile of the alleged abuser, by itself, was unusual: not a priest, but rather a teenage altar boy, who was said to have coerced a peer to engage in various sex acts night after night over six years, inside the Vatican’s own walls. Then powerful church figures helped him become a priest.

The People’s Bank of China aims to become the first major central bank to issue a central bank digital currency. The benefits of an e-currency are immense. As more and more transactions are made using a digital currency controlled centrally, the government gains more and more ability to monitor the economy and its people. […] The rollout is also seen as part of Beijing’s push to weaken the power of the US dollar […] But another crucial motivation is the increasing alarm in Beijing at the size of the crypto industry in China, where a huge amount of cryptocurrency was being “mined” until the recent crackdown. The threat of an unregulated alternative monetary system emerging from blockchain technology is a clear and present danger to the Communist party, according to observers.

‘ethically sourced’ cocaine

Agatha Christie is probably one of the first British ‘stand-up surfers’

The story goes that Tazartès went into the woods, dug a hole, and then sang so loudly that the ducks on the lake began to shake. […] His second album, Tazartès Transports (1980), took this sound further. He collaborated with Jean-Pierre Lentin, editor of the counter-cultural magazine Actuel, who wrote a series of fake ethnographic texts describing the music of invented regions.

Northern Hawk-Cuckoo, Mount Fuji, Japan

Daniel Lemire's blog: Faster sorted array unions by reducing branches

When designing an index, a database or a search engine, you frequently need to compute the union of two sorted sets. When I am not using fancy low-level instructions, I have most commonly computed the union of two sorted sets using the following approach:

    // Union of two sorted arrays (no duplicates within an input);
    // each input value is loaded exactly once in the main loop.
    size_t union2by2(const uint32_t *input1, size_t size1,
                     const uint32_t *input2, size_t size2, uint32_t *out) {
        size_t pos1 = 0, pos2 = 0, pos = 0;
        if (size1 > 0 && size2 > 0) {
            uint32_t v1 = input1[0], v2 = input2[0];
            while (true) {
                if (v1 < v2) {
                    out[pos++] = v1;                  // output v1, advance v1
                    if (++pos1 == size1) break;
                    v1 = input1[pos1];
                } else if (v1 > v2) {
                    out[pos++] = v2;                  // output v2, advance v2
                    if (++pos2 == size2) break;
                    v2 = input2[pos2];
                } else {
                    out[pos++] = v1;                  // v1 == v2, advance both
                    ++pos1; ++pos2;
                    if (pos1 == size1 || pos2 == size2) break;
                    v1 = input1[pos1];
                    v2 = input2[pos2];
                }
            }
        }
        while (pos1 < size1) out[pos++] = input1[pos1++];  // copy the leftover tail
        while (pos2 < size2) out[pos++] = input2[pos2++];
        return pos;
    }

I wrote this code while trying to minimize the load instructions: each input value is loaded exactly once (it is optimal). It is not that load instructions themselves are expensive, but they introduce some latency. It is not clear whether having fewer loads should help, but there is a chance that having more loads could harm the speed if they cannot be scheduled optimally.

One defect with this algorithm is that it requires many branches. Each mispredicted branch comes with a severe penalty on modern superscalar processors with deep pipelines. By the nature of the problem, it is difficult to avoid the mispredictions since the data might be random.

Branches are not necessarily bad. When we try to load data at an unknown address, speculating might be the right strategy: when we get it right, we have our data without any latency! Suppose that I am merging values from [0,1000] with values from [2000,3000]: then the branches are perfectly predictable and they will serve us well. But with too many mispredictions, we might be on the losing end. You will get a lot of mispredictions if you try this algorithm with random data.

Inspired by Andrey Pechkurov, I decided to revisit the problem. Can we use fewer branches?

Mispredicted branches in the above routine will tend to occur when we conditionally jump to a new address in the program. We can try to entice the compiler to favour ‘conditional move’ instructions. Such instructions change the value of a register based on some condition. They avoid the jump and they reduce the penalties due to mispredictions. Given sorted arrays, with no duplicated element, we consider the following code:

while ((pos1 < size1) & (pos2 < size2)) {  // single '&' avoids a short-circuit branch
    v1 = input1[pos1];
    v2 = input2[pos2];
    output_buffer[pos++] = (v1 <= v2) ? v1 : v2;  // write the smaller value
    pos1 = (v1 <= v2) ? pos1 + 1 : pos1;          // advance input 1 if it supplied it
    pos2 = (v1 >= v2) ? pos2 + 1 : pos2;          // advance input 2 if it supplied it
}

You can verify by using the assembly output that compilers are good at using conditional-move instructions with this sort of code. In particular, LLVM (clang) does what I would expect. There are still branches, but they are only related to the ‘while’ loop and they are not going to cause a significant number of mispredictions.

Of course, the processor still needs to load the right data. The address only becomes available in a definitive form just as you need to load the value. Yet we need several cycles to complete the load. It is likely to be a bottleneck, even more so in the absence of branches that can be speculated.

Our second algorithm has fewer branches, but it has more loads. Twice as many loads in fact! Modern processors can sustain more than one load per cycle, so it should not be a bottleneck if it can be scheduled well.

Testing this code in the abstract is a bit tricky. Ideally, you’d want code that stresses all code paths. In practice, if you just use random data, you will often find that the intersection between the sets is small. Thus the branches are more predictable than they could be. Still, it is maybe good enough for a first benchmarking attempt.
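For instance, here is one way to generate a sorted, duplicate-free input of pseudo-random values; this is a sketch of the general idea, not the code from my benchmark:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <random>
    #include <vector>

    // Generate a sorted array of distinct random values in [0, maxval]:
    // draw n values, sort them, and drop the occasional duplicate.
    std::vector<uint32_t> randomSortedArray(size_t n, uint32_t maxval,
                                            std::mt19937_64 &rng) {
        std::uniform_int_distribution<uint32_t> dist(0, maxval);
        std::vector<uint32_t> v(n);
        for (auto &x : v) x = dist(rng);
        std::sort(v.begin(), v.end());
        v.erase(std::unique(v.begin(), v.end()), v.end());
        return v;
    }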

I wrote a benchmark and ran it on the recent Apple processors as well as on an AMD Rome (Zen2) Linux box. I report the average number of nanoseconds per produced element, so smaller values are better. With LLVM, there is a sizeable benefit (over 10%) on both the Apple (ARM) processor and the Zen 2 processor. However, GCC fails to produce efficient code in the branchless mode. Thus if you plan to use the branchless version, you should definitely try compiling with LLVM. If you are a clever programmer, you might find a way to get GCC to produce code like LLVM does: if you do, please share.

system               conventional union   ‘branchless’ union
                     (ns/element)         (ns/element)
Apple M1, LLVM 12    2.5                  2.0
AMD Zen 2, GCC 10    3.4                  3.7
AMD Zen 2, LLVM 11   3.4                  3.0

I expect that this code retires relatively few instructions per cycle. It means that you can probably add extra functionality for free, such as bound checking, because you have cycles to spare. You should be careful not to introduce extra work that gets in the way of the critical path, however.

As usual, your results will vary depending on your compiler and processor. Importantly, I do not claim that the branchless version will always be faster, or even that it is preferable in the real world. For real-world usage, we would like to test on actual data. My C++ code is available: you can check how it works out on your system. You should be able to modify my code to run on your data.

You should expect such a branchless approach to work well when you have lots of mispredicted branches to begin with. If your data is so regular that a union is effectively trivial, or nearly so, then a conventional approach (with branches) will work better. In my benchmark, I merge ‘random’ data, hence the good results for the branchless approach under the LLVM compiler.

Further reading: For high speed, one would like to use SIMD instructions. If it is interesting to you, please see section 4.3 (Vectorized Unions Between Arrays) in Roaring Bitmaps: Implementation of an Optimized Software Library, Software: Practice and Experience 48 (4), 2018. Unfortunately, SIMD instructions are not always readily available.

new shelton wet/dry: ‘It seems to me that the modern painter cannot express his age, the airplane, the atom bomb, the radio, in the old forms of Renaissance or of any past culture.’ –Jackson Pollock


It was June 2020, and Mr. Hamamoto, a former Goldman Sachs executive who invested in real estate, was searching for a business to take public through a merger with his shell company. He had raised $250 million from big Wall Street investors including BlackRock, and spent more than a year looking at over 100 potential targets. If he couldn’t close a deal soon, he would have to return the money.

Then, around nine months before his deadline, bankers from Goldman gave Mr. Hamamoto an enticing pitch: Lordstown Motors, the fledgling electric truck maker that President Donald J. Trump had hailed as a savior of jobs. What followed was a swift merger, then a debacle that put two of the biggest forces shaping the financial world on a collision course.

Lordstown went public in October via a merger with Mr. Hamamoto’s special purpose acquisition company, DiamondPeak Holdings. A Wall Street innovation, SPACs are all the rage, having raised more than $190 billion from investors since the start of 2020, according to SPACInsider. At the same time, small investors have become a potent force in the markets, driving up the stock prices of companies like GameStop and lapping up shares of SPACs, which are highly speculative and can pose financial risks.

In Lordstown, those forces eventually collided, highlighting the uneven playing field between Wall Street and Main Street. Small investors began piling into Lordstown shares after the merger closed, attracted to the hype around electric vehicles. That’s exactly when BlackRock and other early Wall Street investors — as well as top company executives, who all got their shares cheaply before the merger — began to sell some of their holdings.

Now Lordstown is flailing. Regulators are investigating whether its founder, Steve Burns, who resigned as chief executive in June, overstated claims about truck orders. The heat is on Mr. Hamamoto. The company has burned through hundreds of millions of dollars in cash. Its stock price has plunged to $9, from around $31. Investors are suing, including 70-year-old George Troicky, who lost $864,201 on his investment, according to a pending class-action lawsuit.

And Lordstown has yet to begin producing its first truck.

{ NY Times | Continue reading }

image { Jackson Pollock at work in his studio in 1950 photographed by Hans Namuth }

OCaml Weekly News: OCaml Weekly News, 13 Jul 2021

  1. slug 1.0.0 - URL-safe slug generator
  2. Developer Experience Engineer at Jane Street
  3. Hardcaml MIPS CPU Learning Project and Blog
  4. OCaml for Windows installation confusion
  5. OCamlFormat config file auto-completion support in VSCode
  6. Multicore OCaml: June 2021
  7. memprof-limits (first official release): Memory limits, allocation limits, and thread cancellation, with interrupt-safe resources
  8. Bitwuzla 1.0.0 (SMT solver for AUFBVFP)

Planet Lisp: McCLIM: Progress report #12

Dear Community,

A lot of time has passed since the last blog entry. I'm sorry for neglecting this. In this post, I'll try to summarize the past two and a half years.

Finances and bounties

Some of you might have noticed that the bounty program has been suspended. The BountySource platform lost our trust around a year ago when they changed their ToS to include:

If no Solution is accepted within two years after a Bounty is posted, then the Bounty will be withdrawn, and the amount posted for the Bounty will be retained by Bountysource.

They quickly retracted that change, but the trust was already lost. Soon after, I suspended the account and all donations through this platform were stopped. BountySource refunded all our pending bounties.

All paid bounties were summarized in previous posts. Between 2016-08-16 and 2020-06-16 (46 months of donations) we collected $18,700 in total. The BountySource commission was 10%, collected upon withdrawal; all amounts mentioned below are from before the commission was claimed.

During that time, $3,200 was paid to bounty hunters who solved various issues in McCLIM. The bounty program was a limited success: the solutions that were contributed were important, but the harder problems with bounties went unsolved. That said, a few developers who contribute to McCLIM today joined in the meantime, and that might be partially thanks to the bounty program.

When the fundraiser was announced, I declared that I would withdraw $600 monthly from the project account. In the meantime I had a profitable contract, and for two years I stopped withdrawing money. During the remaining three years I withdrew $15,500 ($440/month) from the account.

As of now we don't have funds and there is no official way to donate money to the project (however, this may change in the near future). I hope that this summary is sufficient regarding the fundraiser. If you have further questions, please don't hesitate to contact me, and I'll do my best to answer them.

I'd like to thank all the people who donated to the project. Your financial support made it possible for me to work on the project more than I would have been able to without it. The fact that people care about McCLIM enough to contribute money to it gives me the motivation and faith that working on the codebase is an important effort that benefits the Common Lisp community.

Improvements

The last update was posted on 2018-12-31. A lot of changes have accumulated in the meantime.

  • Bordered output bug fixes and improvements -- Daniel Kochmański
  • Gadget UX improvements (many of them) -- Jan Moringen
  • Text styles fixes and refactor -- Daniel Kochmański
  • Freetype text renderer improvements -- Elias Mårtenson
  • Extended input stream abstraction rewrite -- Daniel Kochmański
  • Implementation of presentation methods for dialog-views -- admich
  • Encapsulating stream missing methods implementation -- admich
  • indenting-output-stream fixes -- Jan Moringen
  • drawing-tests demo rewrite -- José Ronquillo Rivera
  • Line wrap on the word boundaries -- Daniel Kochmański
  • New margin implementation (extended text formatting) -- Daniel Kochmański
  • Presentation types and presentation translators refactor -- Daniel Kochmański
  • Input completion and accept methods bug fixes and reports -- Howard Shrobe
  • Clipboard implementation (and the selection translators) -- Daniel Kochmański
  • CLIM-Fig demo improvements and bug fixes -- Christoph Keßler
  • The pointer implementation (fix the specification conformance) -- admich
  • Drei kill ring improvements -- Christoph Keßler
  • McCLIM manual improvements -- Jan Moringen
  • Frame icon and pretty name change extensions -- Jan Moringen
  • Cleanups and extensive testing -- Nisar Ahmad
  • pointer-tracking rewrite -- Daniel Kochmański
  • drag-and-drop translators rewrite -- Daniel Kochmański
  • Complete rewrite of the inspector Clouseau -- Jan Moringen
  • Rewrite of the function distribute-event -- Daniel Kochmański and Jan Moringen
  • Adding new tests and organizing them in modules -- Jan Moringen
  • Various fixes to the delayed repaint mechanism -- Jan Moringen
  • CLX backend performance and stability fixes -- Christoph Keßler
  • PS/PDF/Raster backends cleanups and improvements -- Jan Moringen
  • Drei regression fixes and stability improvements -- Nisar Ahmad
  • Geometry module refactor and improvements -- Daniel Kochmański
  • Separating McCLIM code into multiple modules -- Daniel Kochmański and Jan Moringen
  • Frames and frame managers improvements -- Jan Moringen and Daniel Kochmański
  • Frame reinitialization -- Jan Moringen
  • PDF/PS backends functionality improvements -- admich
  • Menu code cleanup -- Jan Moringen
  • Pane geometry and graph formatting fixes -- Nisar Ahmad
  • Numerous CLX cleanups and bug fixes -- Daniel Kochmański and Jan Moringen
  • Render backend stability, performance and functionality fixes -- Jan Moringen
  • Presentation types more strict interpretation -- Daniel Kochmański
  • External Continuous Integration support -- Jan Moringen
  • Continuous Integration support -- Nisar Ahmad
  • Improved macros for recording and table formatting -- Jan Moringen
  • Better option parsing for define-application-frame -- Jan Moringen
  • Separation between the event queue and the stream input buffer -- Daniel Kochmański
  • Examples cleanup -- Jan Moringen
  • Graph formatting cleanup -- Daniel Kochmański
  • Stream panes defined in define-application-frames refactor -- admich
  • Menu bar rewrite (keyboard navigation, click to activate) -- Daniel Kochmański
  • Thread-safe execute-frame-command function -- Daniel Kochmański
  • Mirroring code simplification for clx-derived backends -- Daniel Kochmański
  • Arbitrary native transformations for sheets (i.e. zooming) -- Daniel Kochmański
  • extended-streams event matching improvements -- Jan Moringen
  • Render backend performance improvements -- death
  • drei fixes for various issues -- death
  • drei various cleanups -- Jan Moringen
  • clim-debugger improvements -- Jan Moringen
  • Manual spelling fixes and proofreading -- contrapunctus

This is not an exhaustive list of changes. For more details, please consult the repository history. Many changes I introduced during this iteration were the subject of careful (and time-consuming) peer review by Jan Moringen, which resulted in better code quality. Continuous integration provided by Nisar Ahmad definitely made my life simpler. I'd like to thank all contributors for the time and energy they spent on improving McCLIM.

Pending work

If you are working on some exciting improvement for McCLIM which is not ready, you may make a "draft" pull request in the McCLIM repository. Currently, there are three such branches:

  • the SLIME-based backend for CLIM by Luke Gorrie

  • the dot-based graph layout extension by Eric Timmons

  • the xrender backend by Daniel Kochmański

Other than that, I've recently implemented a polygon triangulation algorithm that is meant to be used in the xrender backend (but could be reused, e.g. for OpenGL). Currently, I'm refining the new rendering for clx (xrender). After that, I want to introduce portable double buffering and a new repaint queue. Once these things are in place and extensively tested, I want to roll out a new release of McCLIM.
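
The post doesn't say which triangulation algorithm McCLIM adopted, so purely as a hypothetical illustration of the technique (in C++ rather than McCLIM's Common Lisp), here is a classic O(n²) ear-clipping sketch for simple polygons in counter-clockwise order:

    #include <array>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Pt { double x, y; };

    // Twice the signed area of triangle (a,b,c); positive when CCW.
    static double cross(const Pt &a, const Pt &b, const Pt &c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // True when p lies inside (or on the boundary of) triangle (a,b,c).
    static bool in_tri(const Pt &p, const Pt &a, const Pt &b, const Pt &c) {
        double d1 = cross(a, b, p), d2 = cross(b, c, p), d3 = cross(c, a, p);
        bool neg = d1 < 0 || d2 < 0 || d3 < 0;
        bool pos = d1 > 0 || d2 > 0 || d3 > 0;
        return !(neg && pos); // all on the same side => inside
    }

    // Ear clipping: repeatedly cut off a convex vertex whose triangle
    // contains no other vertex, until only one triangle remains.
    std::vector<std::array<size_t, 3>> ear_clip(const std::vector<Pt> &poly) {
        std::vector<size_t> idx(poly.size());
        for (size_t i = 0; i < idx.size(); ++i) idx[i] = i;
        std::vector<std::array<size_t, 3>> tris;
        while (idx.size() > 3) {
            bool clipped = false;
            for (size_t i = 0; i < idx.size(); ++i) {
                size_t p = idx[(i + idx.size() - 1) % idx.size()];
                size_t q = idx[i], r = idx[(i + 1) % idx.size()];
                if (cross(poly[p], poly[q], poly[r]) <= 0) continue; // reflex
                bool ear = true;
                for (size_t j : idx)        // no other vertex may intrude
                    if (j != p && j != q && j != r &&
                        in_tri(poly[j], poly[p], poly[q], poly[r])) {
                        ear = false;
                        break;
                    }
                if (!ear) continue;
                tris.push_back({p, q, r});
                idx.erase(idx.begin() + i); // cut the ear off
                clipped = true;
                break;
            }
            if (!clipped) break; // degenerate input; give up gracefully
        }
        if (idx.size() == 3) tris.push_back({idx[0], idx[1], idx[2]});
        return tris;
    }

    int main() {
        std::vector<Pt> square = {{0, 0}, {1, 0}, {1, 1}, {0, 1}}; // CCW
        for (auto &t : ear_clip(square))
            std::printf("(%zu %zu %zu)\n", t[0], t[1], t[2]);
    }

A production renderer would add robustness for collinear points and support for holes; the sketch above only covers the simple-polygon case.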

Sincerely yours,
Daniel Kochmański

things magazine: Down, boy

Why Do Electric Cars Look The Way They Do? Because They Can, on the design challenges and opportunities of electric cars (via WITI? and Kottke) / vintage drum machine quiz. Pretty esoteric / exploring the Ice Factory, Grimsby, soon to … Continue reading

the waxing machine: yodaprod: Lorus 1 in One Rainbow watches (1987)

MattCha's Blog: 2021 The Essence of Tea Youle GaoGan: Structured and Complex

I was so excited about the Essence of Tea’s offering of this 2021 Essence of Tea Youle GaoGan, which goes for $248.00 for a 200g cake or $1.24/g, that I decided to purchase it with a bunch of other samples. It was not David’s new 2021 pole dance that aroused my tea buying this time but rather Kathy and David’s listing from a region that is often overlooked by Western puerh vendors: Youle.

Dry leaves have a very fragrant floral scent with an almost faint woodiness. Very fresh leaves…

The first infusion has a nice buttery creamy onset with a clear sweetness that almost turns into a Rice Crispy cereal taste as it disperses over the tongue. There is a subtle sticky feeling in the mouth and a cereal-sweet finish, with a faint coolness deep in the throat.

The second infusion has a pungent, almost minty initial burst of flavor that is really delicious; it disappears into a Rice Crispy cereal taste, and there is a long, deep cooling in the throat. The mouthfeeling is really stimulating and sticky, and there is lots of saliva that returns to the mouth. The aftertaste is sweet breads, and a faint passionfruit aftertaste is found minutes later. The Qi is really peaceful and the body feeling is very strong, like my shoulders are coming up to my ears, a floating kind of feeling. There is a certain vibrating, expansive feeling in the body contrasting the calm generated in my mind. Very nice.

The third infusion has a pungent, floral, mild passionfruit and sweet cereal grain taste that comes fast, all at once, over a faint astringency and a full stimulating mouthfeeling with a subtle dry throat and a strong saliva-producing effect. The minty pungent taste seems to carry through, with the sweet bready cereal taste lasting the second longest of the flavours in the mouth. Nice long, cool, deep throatiness. The stimulating, saliva-producing mouthfeeling is really nice. The Qi is really calming and the body feelings are quite stimulating. An interesting and engaging yet somehow calmly complementary experience. The calming/relaxing Qi is the typical energy of Youle, but these subtle body feelings are an added bonus.



The fourth infusion has a very pungent, cool, and woody taste with a deep throatiness where the cool pungency resides. The initial taste is almost minty and has a cereal/floral-like taste that turns into a cereal and bread sweetness. The mouthfeeling is stimulating and the saliva production is good. Nice calm mind and a shoulder- and body-lifting Qi.

The fifth infusion has a minty pungent onset with some buttery florals and bread sweetness underneath. The mouthfeeling is a full chalky tightness with a pull of saliva into the mouth. Nice relaxing Qi with shoulder-releasing bodyfeelings and a subtle vibrating feeling. Nice focusing feeling.

The sixth infusion has a forest-pungent minty onset with a bread/cereal finish; a bit floral, not that fruity-sweet. The mouthfeeling is soft, sticky, and dry, with a deep cooling throatfeeling. There is some melon and floral in the finish. Kind of a tingling mouthfeeling with less returning saliva and a less deep throatfeeling. The main taste is this prominent minty pungency. The flat, soft, sticky-dry mouthfeeling has a mild pucker and feels kind of fuzzy in the mouth. Nice calming Qi with a nice flowing shoulders-and-arms feeling. Nice combo of calm mind with an active but not distracting bodyfeeling.

The seventh infusion has a juicier, almost lemon-passionfruit, slightly sour taste here. There are some florals, bread sweetness, and a tiny bit of astringency, with a nice full drying mouthfeeling and a deep cool throat. It has a finish of pungent coolness and subtle florals, an almost lemon-cake-like subtle taste here. Qi is nice and calm.

The eighth infusion has a juicy lemon floral sour, bread sweet, floral, and cereal taste. The mouthfeeling is a full, sticky-dry, stimulating feeling with some mid-deep throat cooling. The throat cooling is losing its depth over the last few infusions. The aftertaste is a lemon, floral, bread-like sweetness. The Qi is nice and calming.

The ninth infusion has a fresh zesty lemon floral taste, which has become the dominant initial taste, with sour, bread sweetness and, to a lesser extent, florals that are mainly intertwined with cool pungency. The mouthfeeling builds in strength as the infusions go and is now quite puckering and squeaky, with a deeper coolness and a lemon-pungent forest aftertaste with a bit of saliva production, though not too much anymore. The mouthfeeling is really nice and will serve this puerh well once it is out of the fresh puerh stage it's in now. Nice calm focusing feeling with light limbs.

The 10th has a bit of bitterness in it and a bit of sour lemon, floral, and bread; it transforms into a cool pungency that comes from the throat and plants itself onto the tongue, where a forest pungency is left in the mouth and throat. The aftertaste is a bit sour and bready and cool. A stronger, tighter throat is developing. Engaging tongue coating, and the throat is tightened at the upper throat. A bit of dryness here now. Nice calming, focusing Qi.

In the 11th infusion there is a bready sour bitterness with a bit of cereal and woody and floral. The mouthfeeling becomes quite dry here, and so does the throat. There is not much aftertaste at this point, mainly a ghostly cooling and floral on the breath. The bodyfeelings have dropped off, and just a light airy feeling in the arms is left.

The 12th has a woody, floral, sweet melon fruity taste with a nice cool throat and dry tongue, and a deep floral melon finish. Nice relaxing Qi.

The 13th is a bitter buttery floral with a cool forest-pungent edge that comes out of the throat. Flat, dry tongue coating and a floral melon aftertaste. The more typical Youle tastes are found later in the session here. Nice relaxing Qi.

Leaves were left in the pot overnight, and now I come back for the 14th: there are fruity, buttercup-like florals, a bit of sour, almost mango, a thick dry-coat mouthfeeling, and slight cooling. There is a heady Qi to it now. The strong tongue coat gives this puerh a solid base.

The 15th is much the same, with a sweet floral fruity taste, a tight dry tongue, and a faint throat with faint cooling. I imagine this puerh steeping out like this for a while…



The long mason jar steeping of the spent leaves has a very mild forest pungent taste, with almost unnoticeable cooling over a dry tongue sensation. There is not any sweetness left and not much flavor to push out, but the nice relaxing Qi typical of Youle comes out of this long steep.



Overall, this is a nice Youle with a lot of structure to it. The first group of steepings is interesting stuff, with a calm relaxing Qi typical of Youle but deeper and more powerful, with interesting bodyfeelings paired with that deep calm. The infusions are stronger in the mouth and really saliva-producing, with lots of interesting tastes which slowly evolve throughout the course of the tea session. These include sour, pungent, floral, bready, cereal, minty, forest, and bitter, appearing at different times throughout the session. The minty pungency with the Rice Crispy cereal note is one I’ve not come across before and makes the first handful of infusions super tasty… There is lots in here, with a stronger drying/sticky/tight tongue coating that can get pretty strong if too many infusions are stacked too close together. The strong saliva production along with this mouthfeeling gives the complex evolving taste a good base to age.

Vs the 2013 Pu-erh.sk Youle: these are similar-quality Youle, I think. They both have complex evolving flavours over a stronger evolving mouthfeel. Too many years have gone by between samplings, but this 2021 might be a bit deeper and more complex, especially in the subtle bodyfeelings.

I thought this might be the best 2021 offering from the Essence of Tea until I tried…

Peace

Edit:

Shah8’s Tasting Notes

Alex’s (Tea Notes) Tasting Notes

Double peace

MattCha's Blog: 2021 Essence of Tea Yiwu Queen of the Forest: Subtle Complexity

I found it super amusing this year when David and Yingxi started off the description of this tea by saying “We don’t make many blends”. I found it funny because 2 out of their 4 raw 2021 puerh releases this year were blends (both of which I will have the pleasure of reviewing here on the blog)! This one is blended from 4 Guoyoulin regions in Yiwu and apparently was blended with the good stuff: no filler. I bought the sample, but the cake of this 2021 Essence of Tea Yiwu Queen of the Forest goes for an interesting price point, $198.00 for a 200g cake or $0.99/g; it’s expensive, but not insanely so, staying under $1.00/g.

Dry leaves smell of a sweet kind of sour floral.

The first infusion is a creamy, milky, silky sweetness with a bit of pond-green taste and a finish of slight sour mineral taste. There is an opening-of-the-throat feeling with a gob of saliva in the back of the throat. Nice soft silky tongue coating.

The second infusion has a grainy and creamy taste initially, with a slowly developing creamy sweetness. There are some mineral tastes, sour tastes, salty tastes, and woody tastes, with a faint deep sweetness that is generated really slowly in the mouth. The mouthfeeling is a nice silky feeling, and there is a deep opening in the throat. The Qi seems to give me focus now; a tunnel-vision feeling and a certain calm. I can feel a bit of chest pressure. A salty woody taste is left in the mouth minutes later.

The third infusion has a fruity sweet pop with toasted grain sweetness, a green tea pond and woody finish, a subtle spice taste, a slow creamy and fruity sweetness, even a faint candy sweetness that arrives more with the gob of saliva that develops in the throat and spills over the back of the tongue. There are a lot of subtle tastes in there, complex tastes: woody, fruity, silty, green tea pond, mineral, salty, even spicy… it’s interesting but not overpowering. Nice focusing Qi and an open and heavy chest feeling. Minutes later there is a salty mineral taste in the mouth.

The fourth infusion has a woody licorice onset with slow cooling and a very faint returning candy sweetness. The finish is salty and mineral and even a bit bitter. There are subtle hints of warm spices, slight sour tastes, and fruity tastes, all pretty faint. The mouthfeeling is silky/silty with a bit of grip now. The cooled cup has a woody vegetal onset with a creamier sweetness, a bit of bitterness, fruity sweetness, and a more mineral finish. Nice focusing energy with some subtle neck release.



The fifth infusion is left to cool and has a woody pungent creamy sweetness with a metallic, fruity, mineral, soapy finish. There is a bit more fruitiness initially, but the sweet taste is still secondary to salty/savory tastes. There is some green tea pond taste and some spice. The mouthfeeling is silky and the texture is a touch oily. The throat is more vacuous now, with less saliva production. Nice focusing energy. Some mild chest and shoulder heaviness, and a bit in the neck. Nice feel-good vibe to this puerh. Although there are lots of subtleties going on in this blend, it really feels harmonious.

The sixth infusion has a fruity sweet melon and apricot pop of taste, followed by a woody, mineral, faintly bitter taste in the mouth. The aftertaste is mild, mainly a mineral vegetal taste. This is not a particularly sweet Yiwu tea, not yet at least. The mouthfeeling is a soft silky feeling that is restricted mainly to the tongue. The throat feels empty, with not as much saliva production now. Nice focusing Qi with mild chest/shoulder heaviness and neck feelings.

The seventh infusion has a woody fruity bitter onset with a woody, metallic, soapy, mineral finish.  The pungent and spicy notes have dropped off considerably over the last few infusions and the bitterness is mild but more pronounced. 

The 8th has a savoury green vegetal bitterness with a lesser sweet fruity taste. There is some woody and bitter and mineral in the finish. Not much of a sweet aftertaste. The main taste is a bitter, vegetal, savory taste. The mouthfeeling is a bit drying now, a bit sandy on the tongue. The saliva production is gone. Nice focusing and relaxing going on, a bit spacey.

The 9th has a creamy, woody, mineral, even cocoa-bitter onset that gets a bit creamy-sweet, but the cocoa taste now turns less bitter and more milk chocolate, and there is a mineral creamy sweetness left behind. This infusion is particularly delicious. The mouthfeeling is slightly dry but fuller, with pucker. Nice focusing.

The 10th has a tangy fruitiness now in the onset and finishes with a flat mineral and vegetal taste, with a spicy pungency and a tongue-tingling taste. The tongue coating is tingling and flat, and the throat is not really stimulated so much as slightly open. Nice focusing Qi.

The 11th infusion has an oily, buttery, tangy, fruity, almost sour-fruity sweetness with a tingling pungent mouthfeeling and a sticky coating with an open throat. Not much for aftertastes with this puerh, just a bit of mineral and woody. Nice focusing Qi.


The 12th infusion the next day gives off a sweet lime floral mineral quality.  The mouthfeeling is soft and the aftertaste is faint.  There is a nice mild uplifting feeling.


The 13th has a more mineral onset with a faint emerging floral, melon, mineral sweetness. The mouthfeeling is this nice mild soft silty coating on the tongue. There is a lime melon taste to this one. These later infusions show off the high percentage of Mansa-area Yiwu in this blend. I’m thinking this blend is at least 60% Mansa Yiwu.


The 14th is cooled but gives off a green-vegetal, mellow melon mineral taste. Soft tongue coating. Nice relaxing and chill feeling.


The 15th has a mineral, woody, not really sweet taste with a faint cool finish. Kind of a slippery silty mouthcoating now. I’m surprised that this puerh is still going strong for something pretty subtle!


The 16th has 20 seconds added to the flash steeping. It is more bitter and woody now, with a bit of floral melon green taste. The bitterness is throughout.



The mug steep of the spent leaves gives off a very melon sweetness with lots of floral tanginess and slight sour bitterness. The floral taste is surprisingly complex and the melon sweetness is notable.



Overall, this is a super interesting blend and something that I have not experienced before. What makes it interesting and unique to me is how gentle and subtle the blend is while still offering lots of complexity. It’s complex but never chaotic or confusing. It almost seems too light to be interesting, yet each infusion and the progression through the gongfu session reveals subtle changes that keep you engaged and wondering what will be around the next corner. The Qi is very focusing for the first handful of infusions, then relaxing, then almost hypnotic, then kind of a chill vibe. There is a bit of bodyfeeling in a heavy chest and shoulders and a touch of neck release. There are lots of really subtle tastes, but they take turns, with only a handful of the middle infusions feeling a bit crowded, though never too much. The last infusions feel very much like the heavier Mansa area in the blend. There is not much of the sweet taste or aftertaste that some associate with Yiwu in this blend right now. There is some green, vegetal, pungent/spice, woody, savoury, metallic mineral, and melon, even some mild bitterness, but surprisingly not much of that creamy sweetness. The throat is not very involved in this blend, and it is definitely the weak point. The mouthfeeling is also kind of interesting in that it is often soft but has a very silky, silty feeling that is super enjoyable.

You can really feel the quality of the material in the blend. Most of it is this ethereal Gushu Guoyoulin stuff that on its own can be really nice but is often simple, or too clear and clean, lacking complexity and depth. But when these are blended together, the simple complexity of each area becomes a single voice in a very classy, entrancing Guoyoulin quartet… the result is strangely satisfying to me.

Alex’s (Tea Notes) Tasting notes

Peace


churchturing.org / 2021-07-24T12:59:42