Bifurcated Rivets: From FB

Jalapeños

Bifurcated Rivets: From FB

Minor Swing

Bifurcated Rivets: From FB

Puttin' on the Ritz

Bifurcated Rivets: From FB

I have a set somewhere

Bifurcated Rivets: From FB

Cambelloti

Recent additions: safe-coloured-text-gen 0.0.0.3

Added by Norfair, 2024-03-28T11:52:49Z.

Recent CPAN uploads - MetaCPAN: Runtime-Debugger-0.16

Easy to use REPL with existing lexical support and DWIM tab completion.

Changes for 0.16

  • 2024-03-28
  • No longer expanding escaped variable in quotes.

Recent additions: mini 1.3.0.0

Added by vicwall, 2024-03-28T11:22:48Z.

Minimal essentials

Hackaday: Hybrid Binaries On Windows for ARM: ARM64EC and ARM64X Explained

With ARM processors increasingly becoming part of the desktop ecosystem, porting code written for x86_64 platforms is both necessary and a massive undertaking. For many codebases a simple recompile may be all it takes, but where this is not straightforward, Microsoft’s ARM64EC (‘Emulation Compatible’) Application Binary Interface (ABI) provides a transition path. Unlike Apple’s ‘Fat Binaries’, the approach relies on hybrid PE executables (ARM64 eXtended, or ARM64X) that run mixed ARM64EC and x86_64 binary code on Windows 11 ARM systems. An in-depth explanation is provided by one of the authors, [Darek Mihocka].

ARM64EC was announced by Microsoft on June 28, 2021 as a new feature in Windows 11 for ARM; more recently, Qualcomm put it forward during the 2024 Game Developers Conference (GDC) as one reason why high-performance gaming on its Snapdragon SoCs should be much easier than often assumed. Naturally, this assumes that Windows 11 is being used, as it contains the x86_64 emulator with ARM64EC support. The major difference between plain ARMv8 and ARM64EC code is that the latter makes changes at the ABI level (to calling conventions, for example) that ease interoperability between emulated x86_64 and ARM64 code.
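
To make that concrete, here is a minimal sketch in C. The build commands reflect MSVC’s documented /arm64EC compiler switch and /MACHINE:ARM64X linker target as we understand them, so treat the exact flags as assumptions to check against Microsoft’s documentation; the function name dot3 is just a placeholder of ours. The point is that ordinary C source needs no changes, because the ABI is selected at build time:

    /* dot3.c: ordinary, portable C; nothing ARM64EC-specific in the source.
     * Assumed MSVC build steps (verify against Microsoft's docs):
     *   cl /arm64EC /c dot3.c      compile this file as ARM64EC code
     *   link /MACHINE:ARM64X ...   produce an ARM64X hybrid binary that can
     *                              also carry x86_64-compatible code
     * ARM64EC's calling conventions are designed to track x86_64's closely
     * enough that emulated x86_64 callers can invoke this function without
     * costly argument marshaling. */
    __declspec(dllexport) double dot3(const double a[3], const double b[3])
    {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }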

Although technologically impressive, Windows 11’s market share is still rather small, even before looking at Windows 11 on ARM. It’ll be interesting to see whether Qualcomm’s bravado comes to fruition and makes ARM64EC more relevant for the average software developer.

Recent CPAN uploads - MetaCPAN: Zonemaster-LDNS-4.0.1

Perl wrapper for the ldns DNS library.

Changes for 4.0.1 - 2024-03-28

  • Fixes

Recent CPAN uploads - MetaCPAN: Zonemaster-Backend-11.1.1

A system for running Zonemaster tests asynchronously through an RPC-API

Changes for v11.1.1 - 2024-03-28

  • Fixes

Slashdot: Core PostgreSQL Developer Dies In Airplane Crash

Longtime Slashdot reader kriston writes: Core PostgreSQL developer Simon Riggs dies in airplane crash in Duxford, England. Riggs was the sole occupant of a Cirrus SR22-T which crashed on March 26 after performing touch-and-go maneuvers. Riggs was responsible for many of the enterprise-level features in PostgreSQL, including point-in-time recovery, synchronous replication, and hot standby. He was also the head of 2ndQuadrant, a company that provides PostgreSQL support. Riggs' last community contribution was the presentation of the keynote at PostgreSQL Conference Europe 2023 in Prague, which you can watch on YouTube.

Read more of this story at Slashdot.

Recent CPAN uploads - MetaCPAN: Net-OBS-Client-0.1.2

simple OBS API calls

Changes for 0.1.2 - 2024-03-27

  • added Net::OBS::LWP::UserAgent with 'mirror' method
  • multiple configuration parameters for Net::OBS::SigAuth

Recent CPAN uploads - MetaCPAN: App-ansiecho-1.08

Colored echo command using ANSI terminal sequence

Changes for 1.08 - 2024-03-28T09:56:23Z

  • use charnames ':loose', which requires perl 5.16

MetaFilter: Ma,Ma,Ma...Ma,Ma...Look what I can do!

You might be more likely to send a text or email these days, but some people still use letters to send fan mail. While plenty of celebrities receive messages of adoration, it turns out that you can also send fan mail to the "Mona Lisa." Thanks to a special mailing address, as well as a mail slot in the Louvre located near the famous artwork that Leonardo da Vinci created in 1503, those who feel inclined can write a message to the beloved masterpiece. What could they possibly say in their notes? Artnet explains:

The letters, which the museum stores inside its labyrinthine archive, come in many different shapes and sizes. Some leave unanswerable questions about Leonardo's work, while others ask the polymath for life advice. Others still address their writing to the enigmatic woman in the painting, declaring their love and even asking for her hand in marriage. They write poems, sometimes accompanied by flowers or small mementos. What would you write to the "Mona Lisa" if you were to offer her fan mail? You can do just that by mailing your message using the address below:

Musée du Louvre, Service des publics
A l'attention de Mona Lisa
75058 Paris Cedex 01, France

Open Culture: The Song From the 1500’s That Blows Rick Beato Away: An Introduction to John Dowland’s Entrancing Music

In 2006, Sting released an album called Songs from the Labyrinth, a collaboration with Bosnian lutenist Edin Karamazov consisting mostly of compositions by Renaissance composer John Dowland. This was regarded by some as rather eccentric, but to listeners familiar with the early music revival that had already been going on for a few decades, it would have been almost too obvious a choice. For Dowland had long since been rediscovered as one of the late sixteenth and early seventeenth century’s musical superstars, thanks in part to the recordings of classical guitarist and lutenist Julian Bream.

“When I was a kid, I went to the public library in Fairport, New York, where I’m from, and I got this Julian Bream record,” says music producer and popular YouTuber Rick Beato (previously featured here on Open Culture) in the video above. Beato describes Bream as “one of the greatest classical guitarists who ever lived” and credits him with having “popularized the classical guitar and the lute and renaissance music.” The particular Bream recording that impressed the young Beato was of a John Dowland composition, made exotic by distance in time, called “The Earl of Essex Galliard,” a performance of which you can watch on YouTube.

Half a century later, Beato’s enjoyment of this piece seems undiminished — and indeed, so much in evidence that this practically turns into a reaction video. Listening gets him reminiscing about his early Dowland experiences: “I would put on this Julian Bream record of him playing lute, just solo lute, and I would sit there and I would putt” — his father having been enough of a golf enthusiast to install a small indoor putting green — and “imagine living back in the fifteen-hundreds, what it would be like.” These pretend time-travel sessions matured into a genuine interest in early music, one he pursued at the New England Conservatory of Music and beyond.

What a delight it would have been for him, then, to find that Sting had laid down his own version of “The Earl of Essex Galliard,” otherwise known as “Can She Excuse My Wrongs.” In one especially striking section, Sting takes “the soprano-alto-tenor-bass part” and records the whole thing using only layers of his own voice: “there’s four Stings here,” Beato says, referring to the relevant digitally manipulated scene in the music video, “but there’s actually more than four voices.” Songs from the Labyrinth may only have been a modestly successful album by Sting’s standards, but it has no doubt turned more than a few middle-of-the-road pop fans onto the beauty of English Renaissance music. If Beato’s enthusiasm has also turned a few classic-rock addicts into John Dowland connoisseurs, so much the better.

Related content:

The History of the Guitar: See the Evolution of the Guitar in 7 Instruments

Bach Played Beautifully on the Baroque Lute, by Preeminent Lutenist Evangelina Mascardi

Watch All of Vivaldi’s Four Seasons Performed on Original Baroque Instruments

Hear Classic Rock Songs Played on a Baroque Lute: “A Whiter Shade of Pale,” “While My Guitar Gently Weeps,” “White Room” & More

Renaissance Knives Had Music Engraved on the Blades; Now Hear the Songs Performed by Modern Singers

What Makes This Song Great?: Producer Rick Beato Breaks Down the Greatness of Classic Rock Songs in His New Video Series

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

MetaFilter: "Every day, there were fewer and fewer kings."

The Achilles Trap doubles as a surprisingly sympathetic study of a man who, as his powers slipped away, spent the last decade of his life jerry-rigging monuments of his own magnificence. Coll draws much of his material from extensive interviews with retired American intelligence officers and former members of Saddam's bureaucracy, as well as from a previously unavailable archive of audio tapes from Saddam's own state offices. What emerges is a portrait of Saddam as an eccentric in the mold of G.K. Chesterton—if Chesterton were bloodthirsty, paranoid, and power-mad—a man driven ultimately by deep reverence for the sense that hides beneath nonsense. from Saddam's Secret Weapon, a review of The Achilles Trap by Steve Coll [The American Conservative]

MetaFilter: The Devil - a Life

"In the past nine years, [Nick Cave] has lost two sons – an experience he explores in a shocking, deeply personal new ceramics project. He discusses mercy, forgiveness, making and meaning." A longish interview from this morning's Guardian.

MetaFilter: Animal Hybrids That Exist in Nature

Animal Hybrids That Exist in Nature, From Narlugas to Grolar Bears to Coywolves [Smithsonian Mag]

Hackaday: Automation Makes Traditional Japanese Wood Finishing Easier

Unless you move in architectural circles, you might never have heard of Yakisugi. But as a fence builder, [Lucas] over at Cranktown City sure has, with high-end clients requesting the traditional Japanese wood-finishing method, which requires the outer surface of the wood to be lightly charred. It’s a fantastic look, but it’s a pain to do manually. So, why not automate it?

Now, before we get into a whole thing here, [Lucas] himself notes that what he’s doing isn’t strictly Yakisugi. That would require the use of cypress wood, and charring only one side, neither of which would work for his fence clients. Rather, he’s using regular dimensional lumber which is probably Douglas fir. But the look he’s going for is close enough to traditional Yakisugi that the difference is academic.

To automate the process of burning the wood and subsequently brushing off the loose char, [Lucas] designed a double-barreled propane burner and placed it inside a roughly elliptical chamber big enough to pass a 2×8 — sorry, metric fans; we have no idea how you do dimensional lumber. The board rides through the chamber on a DIY conveyor track, with flame swirling around both sides of the board for an even char. After that, a pair of counter-rotating brushes abrade off the top layer of char, revealing a beautiful, dark finish with swirls of dark grain on a lighter background.

[Lucas] doesn’t mention how much wood he’s able to process with this setup, but it seems a lot easier than the manual equivalent, and likely yields better results. Either way, the results are fantastic, and we suspect once people see his work he’ll be getting more than enough jobs to justify the investment.

Open Culture: The Beautiful Art of Making Japanese Calligraphy Ink Out of Soot & Glue

Founded in 1577, Kobaien remains Japan’s oldest manufacturer of sumi ink sticks. Made of soot and animal glue, the ink stick—when ground against an inkstone, with a little water added—produces a beautiful black ink used by Japanese calligraphers. And, often, a 200-gram ink stick from Kobaien can cost over $1,000.

How can soot and animal glue command such a high price? As the Business Insider video above shows, there’s a fine art to making each ingredient—an art honed over the centuries. Watching the artisans make the soot alone, you immediately appreciate the complexity beneath the apparent simplicity. When you’re done watching how the ink gets made, you’ll undoubtedly want to watch the artisans making calligraphy brushes, an art form that has its own fascinating history. Enjoy!

Related Content 

Download 215,000 Japanese Woodblock Prints by Masters Spanning the Tradition’s 350-Year History

Learn Calligraphy from Lloyd Reynolds, the Teacher of Steve Jobs’ Own Famously Inspiring Calligraphy Teacher

The Model Book of Calligraphy (1561–1596): A Stunningly Detailed Illuminated Manuscript Created over Three Decades

Recent additions: geoip2 0.4.1.2

Added by ondrap, 2024-03-28T07:45:57Z.

Pure haskell interface to MaxMind GeoIP database

Disquiet: Disquiet Junto Project 0639: Center (4 of 3)

Each Thursday in the Disquiet Junto music community, a new compositional challenge is set before the group’s members, who then have five days to record and upload a track in response to the project instructions.

Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. The Junto is weekly so that you know it’s there, every Thursday through Monday, when your time and interest align.

Tracks are added to the SoundCloud playlist for the duration of the project. Additional (non-SoundCloud) tracks appear in the lllllll.co discussion thread.

The following instructions went to the group email list (via juntoletter.disquiet.com).

Disquiet Junto Project 0639: Center (4 of 3)
The Assignment: Rework musical trios using the constituent parts — one of two ways.

These instructions are fairly lengthy. Please read carefully. Originally this trios sequence of projects would have ended with the completion of the trios after three consecutive weeks. However, those went particularly well, so let’s see if we can push it a little further.

Please note: You needn’t have participated in any of the prior Junto projects to participate in this one. 

Step 1: In the previous project, we completed musical trios. Project 0638 was the third of three consecutive projects. In the first one, 0636, musicians recorded solos. In the second, 0637, musicians took those 0636 tracks, nudged them to the left side of the stereo spectrum, and then added their own music to the right, forming duets with a hole in the middle. In project 0638, other musicians introduced a third track down the center of the 0637 recordings, completing the trios. You can revisit the music via the blog posts disquiet.com/0636, disquiet.com/0637, and disquiet.com/0638, and you can follow the forking paths through the related Google Drive spreadsheet.

Step 2: The goal of this project is to rework the material that accumulated over the course of the previous three projects. There are two main ways to go about this. One is to simply choose one or more tracks and remix, combine, mash-up, or otherwise transform them. The other is to move on to Step 3.

Step 3: Choose one 0636 track, a solo, that yielded more than one 0638 trio. (These are highlighted in light blue, or maybe it’s teal, in the leftmost column of the spreadsheet.) Trace the various routes that 0636 track took. Perhaps it also led to some 0637 tracks that never went any further. Perhaps there are various 0638 versions based on varied 0637 versions. In any case, record a piece of music that lets the listener experience the branching paths that the original 0636 track took over the course of the three phases of the trios project sequence. Clearly you will be using a linear form to represent variations, so how to achieve that will require some compositional ingenuity. But you’re up to the task.

Tasks Upon Completion:

Label: Include either “disquiet0639remix” (if you follow the loose rules of Step 2) or “disquiet0639paths” (if you follow the more specific rules of Step 3) in the name of your track.

Upload: Post your track to a public account (SoundCloud preferred but by no means required). It’s best to focus on one track (but if you post more than one, clarify which is the “main” rendition).

Share: Post your track and a description/explanation at https://llllllll.co/t/disquiet-junto-project-0639-path-ways-4-of-3/

Discuss: Listen to and comment on the other tracks.

Additional Details:

Length: The length is entirely up to you.

Deadline: Monday, April 1, 2024, 11:59pm (that is: just before midnight) wherever you are.

About: https://disquiet.com/junto/

Newsletter: https://juntoletter.disquiet.com/

License: It’s required for this sequence of projects that you set your track as downloadable and allow for attributed remixing (i.e., an attribution Creative Commons license).

Please Include When Posting Your Track:

More on the 639th weekly Disquiet Junto project, Path Ways (4 of 3) — The Assignment: Rework musical trios using the constituent parts — one of two ways — at https://disquiet.com/0639/

Slashdot: A Faster Spinning Earth May Cause Timekeepers To Subtract a Second From World Clocks

According to a new study published in the journal Nature, timekeepers may have to consider subtracting a second from our clocks around 2029 because the planet is rotating faster than it used to. The Associated Press reports: "This is an unprecedented situation and a big deal," said study lead author Duncan Agnew, a geophysicist at the Scripps Institution of Oceanography at the University of California, San Diego. "It's not a huge change in the Earth's rotation that's going to lead to some catastrophe or anything, but it is something notable. It's yet another indication that we're in a very unusual time." Ice melting at both of Earth's poles has been counteracting the planet's burst of speed and is likely to have delayed this global second of reckoning by about three years, Agnew said. "We are headed toward a negative leap second," said Dennis McCarthy, retired director of time for the U.S. Naval Observatory who wasn't part of the study. "It's a matter of when." It's a complicated situation that involves physics, global power politics, climate change, technology and two types of time. [...] McCarthy said the trend toward needing a negative leap second is clear, but he thinks it's more to do with the Earth becoming more round from geologic shifts from the end of the last ice age. Three other outside scientists said Agnew's study makes sense, calling his evidence compelling. But Levine doesn't think a negative leap second will really be needed. He said the overall slowing trend from tides has been around for centuries and continues, but the shorter trends in Earth's core come and go. "This is not a process where the past is a good prediction of the future," Levine said. "Anyone who makes a long-term prediction on the future is on very, very shaky ground."
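
To picture what that would mean at the clock face: UTC absorbs these corrections as leap seconds at the end of a day, and a negative leap second removes a second rather than inserting one. Here is a toy C sketch of our own (purely an illustration of the second labels, not any official timekeeping algorithm):

    /* leap_demo.c: toy illustration of leap-second labeling (ours, not any
     * official algorithm). A positive leap second inserts the extra label
     * 23:59:60; a negative one ends the day at 23:59:58, so the label
     * 23:59:59 never occurs. */
    #include <stdio.h>

    static void last_seconds(int leap)   /* leap = +1 (insert) or -1 (remove) */
    {
        int last = 59 + leap;            /* final second label of the day */
        for (int s = 57; s <= last; s++)
            printf("23:59:%02d\n", s);
        printf("00:00:00 (next day)\n");
    }

    int main(void)
    {
        puts("positive leap second:");
        last_seconds(+1);
        puts("negative leap second:");
        last_seconds(-1);
        return 0;
    }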

Read more of this story at Slashdot.

Recent additions: http2-tls 0.2.7

Added by KazuYamamoto, 2024-03-28T05:01:36Z.

Library for HTTP/2 over TLS

Hackaday: Webserver Runs on Android Phone

Android, the popular mobile phone OS, is essentially just Linux with a nice user interface layer covering it all up. In theory, it should be able to do anything a normal computer running Linux could do. And, since most web servers in the world are running Linux, [PelleMannen] figured his Android phone could run a web server just as well as any other Linux machine and built this webpage that’s currently running on a smartphone, with an additional Reddit post for a little more discussion.

The phone uses Termux (which we’ve written about briefly before) to get to a Bash shell on the Android system. Before that happens, though, some setup needs to take place, largely involving installing F-Droid, through which Termux can be installed. From there the standard SSH and Apache servers can be installed as if the phone were running a normal Linux distribution. The rest of the installation involves tricking the phone into thinking it’s a full-fledged computer, including a number of tweaks to keep the phone from halting execution when the screen locks, among other phone-specific issues.
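
For a sense of how little code serving a page can take, here is a minimal sketch of our own in C, the sort of toy server you could compile with clang inside Termux. To be clear, this is our illustration, not [PelleMannen]’s setup, which uses the stock Apache and SSH packages; the port number is arbitrary, and a high port matters because unrooted Android won’t let a normal app bind port 80:

    /* hello_httpd.c: a deliberately tiny, one-request-at-a-time HTTP server.
     * Build (e.g. in Termux): clang hello_httpd.c -o hello_httpd */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);        /* unprivileged port: no root needed */
        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }
        listen(srv, 8);

        const char *resp =
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/plain\r\n"
            "Content-Length: 22\r\n"
            "Connection: close\r\n\r\n"
            "Hello from my phone!\r\n";     /* 22 bytes, matching the header */

        for (;;) {                          /* serve one connection at a time */
            int c = accept(srv, NULL, NULL);
            if (c < 0)
                continue;
            char buf[1024];
            if (read(c, buf, sizeof buf) > 0)   /* read and discard request */
                write(c, resp, strlen(resp));
            close(c);
        }
    }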

With everything up and running, [PelleMannen] reports that it runs surprisingly well with the small ARM system outputting almost no heat. Since the project page is being hosted on this phone we can’t guarantee that the link above works, though, and it might get a few too many requests to stay online. We wish it were a little easier to get our pocket-sized computers to behave in similar ways to our regular laptops and PCs (even if they don’t have quite the same amount of power) but if you’re dead-set on repurposing an old phone we’ve also seen them used to great effect in place of a Raspberry Pi.

Open Culture: Get Unlimited Access to Courses & Certificates: Coursera Is Offering $100 Off of Coursera Plus Until March 31

A heads up on a deal: Between now and March 31, 2024, Coursera is offering a $100 discount on its annual subscription plan called “Coursera Plus.” Normally priced at $399, Coursera Plus (now available for $299) gives you access to 7,000+ world-class courses for one all-inclusive subscription price. This includes Coursera’s Specializations and Professional Certificates, all of which are taught by top instructors from leading universities and companies (e.g. Yale, Duke, Google, Meta, and more).

The $299 annual fee–which translates to 81 cents per day–could be a good investment for anyone interested in learning new subjects and skills, or earning certificates that can be added to your resume. Just as Netflix’s streaming service gives you access to unlimited movies, Coursera Plus gives you access to unlimited courses and certificates. It’s basically an all-you-can-eat deal. Explore the offer (before March 31, 2024) here.

Note: Open Culture has a partnership with Coursera. If readers enroll in certain Coursera courses and programs, it helps support Open Culture.

Recent additions: tls 2.0.2

Added by KazuYamamoto, 2024-03-28T04:03:13Z.

TLS protocol native implementation

Slashdot: Oregon Governor Signs Nation's First Right-To-Repair Bill That Bans Parts Pairing

An anonymous reader quotes a report from Ars Technica: Oregon Governor Tina Kotek today signed the state's Right to Repair Act, which will push manufacturers to provide more repair options for their products than any other state so far. The law, like those passed in New York, California, and Minnesota, will require many manufacturers to provide the same parts, tools, and documentation to individuals and repair shops that they provide to their own repair teams. But Oregon's bill goes further, preventing companies from implementing schemes that require parts to be verified through encrypted software checks before they will function. That practice is known as parts pairing or serialization, and Oregon's bill, SB 1596, is the first in the nation to target it. Oregon State Senator Janeen Sollman (D) and Representative Courtney Neron (D) sponsored and pushed the bill in the state senate and legislature. Oregon's bill isn't stronger in every regard. For one, there is no set number of years for a manufacturer to support a device with repair support. Parts pairing is prohibited only on devices sold in 2025 and later. And there are carve-outs for certain kinds of electronics and devices, including video game consoles, medical devices, HVAC systems, motor vehicles, and -- as with other states -- "electric toothbrushes." "By eliminating manufacturer restrictions, the Right to Repair will make it easier for Oregonians to keep their personal electronics running," said Charlie Fisher, director of Oregon's chapter of the Public Interest Research Group (PIRG), in a statement. "That will conserve precious natural resources and prevent waste. It's a refreshing alternative to a 'throwaway' system that treats everything as disposable."

Read more of this story at Slashdot.

ScreenAnarchy: RESIDENT ALIEN S3 E7 Review: Surprise, Surprise, Surprise

Alan Tudyk, Sara Tomko, Corey Reynolds, Alice Wetterlund, Levi Fiehler, Elizabeth Bowen, and Judah Prehn star in the sci-fi comedy series, airing on SYFY and streaming the next day on Peacock.

[Read the whole post on screenanarchy.com...]

MetaFilter: Shani Mott, Black Studies Scholar, Dies at 47

Her work looked at how race and power are experienced in America. In 2022, she filed a lawsuit saying that the appraisal of her home was undervalued because of bias.

""She burned through two oxygen tanks and was in a wheelchair the entire time," Dr. Connolly said. "And her ability to speak forcefully and to be direct and, frankly, to be so crystal clear about how real estate works and, in particular, instruments within the structure of a mortgage transaction, it was a master class."" Home Appraised With a Black Owner: $472,000. With a White Owner: $750,000. Lawsuit Alleging Racial Bias in Home Appraisals Is Settled Nathan Connolly and the estate of Shani Mott, who recently died, will receive a payment from their mortgage lender, which also agreed to several policy changes to discourage discrimination. Prof. Mott's Storytime channel

Hackaday: Retrotechtacular: TOPS Runs the 1970s British Railroad

How do you make the trains run on time? British Rail adopted TOPS, a computer system born of IBM’s SAGE defense project, along with work from Stanford and Southern Pacific Railroad. Before TOPS, running the railroad took paper. Lots of paper, covering a train’s history, assignments, and all the other bits of data required to keep the trains moving. TOPS kept this data in real-time on computer screens all across the system. While British Rail wasn’t the only company to deploy TOPS, they were certainly proud of it and produced the video you can see below about how the system worked.

There are a lot of pictures of old big iron and the narrator says it has an “immense storage capacity.”  The actual computers in question were a pair of IBM System/370 mainframes that each had 4 MB of RAM. There were also banks of 3330 disk drives that used removable disk packs of — gasp — between 100 and 200 MB per pack.

As primitive and large as those disk drives were, they pioneered many familiar-sounding technologies. For example, they used voice coils, servo tracking, MFM encoding, and error-correcting encoding.

The software was written in BAL, the IBM assembly language, although there was a set of macros called TOPSTRAN to make it slightly easier. Originally, each depot was going to get an IBM card reader and punch machine, but these proved to be unreliable in the rugged environment. Instead, each depot had an emulated card reader and punch using a Datapoint 2200 — the famous computer that never used the Intel 8008, even though that CPU was originally designed for it.

In the video, you can see some Datapoint 2200s and card readers in use back at the data center. They even take the cover off one of the Datapoints around the 3-minute mark. The machines had 12K of RAM (on three circuit cards) and two tape drives. Around the 24-minute mark you get a look at a 600 baud modem, although the railroad apparently only used 200 baud for reliability. They also show a 2,400 baud modem that, we are pretty sure, had to be tuned before use.

The video can’t seem to decide if it is for general audiences or technical people. For example, it describes the tones from the modem and shows block diagrams of many of the systems. There are even some fake oscilloscope traces of modem outputs.

As far as we can tell, some of TOPS is still in use today. We hope some of it has been modernized, though. If you like 1970s mainframes, we’ll go ahead and waste the rest of your day. No kidding. The video doesn’t embed, but you can play it by clicking the picture below.

Slashdot: Why the US Could Be On the Cusp of a Productivity Boom

Neil Irwin reports via Axios: The dearth of productivity growth over the last couple of decades has held back incomes in the U.S. and other rich countries, according to a report out Wednesday from the McKinsey Global Institute, the research arm of the global consultancy. Productivity growth has been weak in the U.S. and Western Europe since the 2008 global financial crisis, but things looked better among many emerging markets. The McKinsey report finds that global labor productivity growth was 2.3% a year from 1997 to 2022, a rapid rate that has increased incomes and quality of life in large parts of the world. China and India account for the largest portion of that surge -- half of overall global productivity improvement, with other emerging markets accounting for another 25%, led by Central and Eastern Europe and emerging Asian economies. In the U.S., the report finds that the decline in capital investment following the 2008 financial crisis has resulted in a $4,500 lower per-capita GDP in 2022 than it would have if pre-crisis trends had continued. Rapid advances in manufacturing technology, especially for electronics, petered out in the same time period, subtracting another $5,000 from per-capita GDP. "Digitization was much discussed as the main candidate to rev up productivity again, but its impact failed to spread beyond" the tech sector, the authors write. The authors are optimistic that a confluence of factors will make the years ahead different. The rise in global interest rates and inflation are evidence of stronger global demand. Many countries are experiencing labor shortages that may incentivize more productivity-enhancing investment. And artificial intelligence and related technologies create big opportunities. "Inflationary pressure and rising interest rates could be signs that we are leaving behind secular stagnation and entering an era of higher demand and investment," the report finds. "In corporate boardrooms around the world right now, there's a tremendous amount of conversation associated with [generative] AI, and I think there's a broad acknowledgment that this could very much transform productivity at the company level," Olivia White, a McKinsey senior partner and co-author of the report, tells Axios. "Another thing that's happening right now is the conversation about labor. Labor markets in all advanced economies, and the U.S. is really sort of top of the heap, are very, very tight right now. So there's a lot of conversation around what do we do to make the people that we have as productive as they can be?"

Read more of this story at Slashdot.

Slashdot: Amazon Fined In Poland For Dark Pattern Design Tricks

Poland has fined Amazon close to $8 million for misleading consumers about the conclusion of sales contracts on its online marketplace. The sanction "also calls out the e-commerce giant for deceptive design elements which may inject a false sense of urgency into the purchasing process and mislead shoppers about elements like product availability and delivery dates," reports TechCrunch. From the report: The country's consumer and competition watchdog, the UOKiK, has been looking into complaints about Amazon's sales practices since September 2021, following complaints from shoppers, including some who did not receive their purchases. The authority opened a formal investigation into Amazon's practices in February 2023. Wednesday's sanction is the conclusion of that probe. The UOKiK found consumers who ordered products on Amazon could have their purchases subsequently cancelled by the tech giant as it does not treat the moment of purchase as the conclusion of a sales contract, despite sending consumers confirmation of their order -- even after consumers have paid for the product. For Amazon, the conclusion of a sales contract only occurs once it has sent information about the actual shipment. [...] Its enforcement also calls out Amazon for using deceptive design to encourage shoppers to click buy by presenting misleading information about product availability and delivery windows -- such as by listing how many items were in stock to be purchased and providing a countdown clock to order an item in order to get it on a particular delivery date. Its investigation found Amazon does not always meet these deadlines for orders, nor ship products immediately as they may be out of stock despite claims to the contrary shown to consumers. "Amazon treats the data it provides on availability and shipping date as indicative but the way it is presented does not indicate this," the UOKiK noted, adding: "Consumers can only find out about this in the terms of sale on the platform." While Amazon does offer a delivery guarantee -- offering a refund if items do not ship within the stated time -- the authority found it failed to provide consumers with information about the rules of this service before placing an order. It only offers details at the order summary stage. And then only "if the consumer decides to read the subsequent links specifying delivery details." Shoppers who did not follow the link to read more may not have been aware of their right to apply for and receive a refund from Amazon if there is a delay in shipment. It also found the e-commerce giant failed to provide information about the "Delivery Guarantee" in the purchase confirmation sent to shoppers. Amazon said it will appeal the fine. The company also writes: "Fast and reliable delivery across a wide selection of products is a top priority for us, and Amazon.pl has millions of items available with fast and free Prime delivery. Since launching Amazon.pl in 2021, we have continuously invested and worked hard to provide customers with a clear, reliable delivery promise at check out, and while the vast majority of our deliveries arrive on time, customers can contact us in the rare event that they experience a delay or order cancellation, and we will make it right. Over the last year, we have collaborated with the Office of Competition and Consumer Protection (UOKiK), and proposed multiple voluntary amendments to continue to improve the customer experience on Amazon.pl. 
We strictly follow legal standards in all countries where we operate and we strongly disagree with the assessment and penalty issued by the UOKiK. We will appeal this decision."

Read more of this story at Slashdot.

Hackaday: FLOSS Weekly Episode 776: Dnsmasq, Making the Internet Work Since 1999

This week Jonathan Bennett and Simon Phipps sit down with Simon Kelley to talk about Dnsmasq! That’s a piece of software that was first built to get a laptop online over LapLink, and now runs on most of the world’s routers and phones. How did we get here, and what does the future of Dnsmasq look like? For now, Dnsmasq has a bus factor of one, which is a bit alarming, given how important it is to keeping all of us online. But the beauty of the project being available under the GPL is that if Simon Kelley walks away, Google, OpenWRT, and other users can fork and continue maintenance as needed. Give the episode a listen to learn more about Dnsmasq, how it’s tied to the Human Genome Project, and more!


Did you know you can watch the live recording of the show right in the Hackaday Discord? Have someone you’d like us to interview? Let us know, or contact the guest and have them contact us! Next week we’re chatting with Joshua Colp about Asterisk.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.

Penny Arcade: Carnage

You might have read what we did, which is that Justin Lin, of Fast And/Or Furious fame, is floating around as the director for the next Spider-Man movie. As one of the few manifestations of Marvel-type shit that actually still works - I have what I believe are an "earned" set of expectations for Deadpool & Wolverine - it's one of a very small number of movies that I would prioritize seeing in a theater. I don't know if that is still completely fucked up as an industry; the only movies I followed closely were movies in this particular genre, and they weren't performing for reasons that weren't entirely to do with the plague. I only know that my own behavior around it has changed, and it's entirely possible that Gabe isn't really going back at all. If there's something that makes us wanna, it's probably worth noting when it happens.

Colossal: In Sand and Stone, Jon Foreman Sculpts Hypnotic Gradients and Organic Motifs

yellow leaves radiate outward on the forest floor

“Aureus” (2022). All images © Jon Foreman, shared with permission

Nature’s subtle irregularities and variations are fodder for Jon Foreman (previously). Using found leaves, stones, and sand, the Wales-based artist assembles swirling gradients and organic motifs that radiate across forest floors and beaches. He precisely arranges each composition by size and color, relying on basic geometric principles to transform a humble material and unconventional backdrop into stunning artworks. Considering the constructions last just a short time before they’re blown or washed away, head to Foreman’s Instagram to see them in pristine condition.

a circular stone gradient work on a beach

“Stone Knitting” (2024)

undulating lines of stones trail across the beach

“Pontis” (2024)

water juts up against an organic stone motif

“Aqua Exemplaria” (2024)

a swirling stone artwork on a beach

“Triplex Motus” (2023)

a white stone spiral that radiates outward on a beach

“Stella Spiralis” (2023)

branches shaped like a helix crawl up a tree with orange leaves around it

“Helix” (2024)

a radiating circular fire-like work on a beach

“Crescents Glow” (2024)

the artist sits on the beach next to a geometric stone work

“Quadratura” (2024)

Do stories and artists like this matter to you? Become a Colossal Member today and support independent arts publishing for as little as $5 per month. The article In Sand and Stone, Jon Foreman Sculpts Hypnotic Gradients and Organic Motifs appeared first on Colossal.

Disquiet: In Advance of Disquiet Junto Project 0639

This went out to juntoletter.disquiet.com subscribers just before noon Pacific on March 27:

One nice thing about having moved to Buttondown for these Disquiet Junto project emails (from TinyLetter, which was shut down just over a month ago by its parent company, Mailchimp) is that I can send more emails than I used to be able to.

Previously, sending one email each week kept the list just under the maximum that the account was capable of. I promise to not start sending out emails willy-nilly. It’s just nice to have the freedom to communicate a bit off-cycle, solely to the extent that it serves the projects and the Junto community.

All of which is a lead up to a simple request: please make sure, if you did tracks for project 0638, that they are downloadable (whether you posted to SoundCloud or to YouTube). This was stated in the 0638 instructions. I mention this here because project 0639 will build on project 0638. This recent sequence was initially planned to be a three-part project, but with the option to extend it further if it went well. Suffice to say, the past three projects went very well, indeed.

The official project instructions for 0639 will go out in about 11.5 hours, shortly after midnight California time tomorrow.

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Law

Click here to go see the bonus panel!

Hovertext:
This is a complete theory of law, no exceptions, don't email me.


Today's News:

new shelton wet/dry: donna summer

The fight began when a customer threw a banana at the gas station employees, who then threw it back. The customer and staff then began throwing multiple bananas back and forth. The customer then punched one of the workers in the face. One employee then chased the customer into the parking lot and hit him several times in the head with a PVC pipe.

Parents file $1.5M lawsuit after Quebec teacher accused of selling students’ artwork online

The solar eclipse will likely lead to a spike in fatal car crashes […] 31% more fatal car crashes than on a usual day […] It’s during the hours immediately before when people are rushing to the site of observation and the hours after when they hurry to get back home that these tragic accidents can happen.

couples who are concordant in their drinking behavior (that is, both members drink alcohol) tend to live longer.

two nights of sleep restriction (4 h in bed per night) made people feel 4.44 years older compared to sleep saturation (9 h in bed per night). Additionally, moving from feeling extremely alert to feeling extremely sleepy was associated with feeling 10 years older

Scientists rename human genes to stop Microsoft Excel from misreading them as dates

Deepfakes are spreading, putting creator and brand safety at risk

Court filings unsealed last week allege Meta created an internal effort to spy on Snapchat in a secret initiative called “Project Ghostbusters.” Meta did so through Onavo, a Virtual Private Network (VPN) service the company offered between 2016 and 2019 that, ultimately, wasn’t private at all.

He said Trump Media is likely worth somewhere around $2 a share — nowhere near its closing stock price of $58. […] Trump Media generated just $3.4 million of revenue through the first nine months of last year, according to filings. The company lost $49 million over that span. And yet the market is valuing Trump Media at approximately $11 billion. For context, Reddit was only valued at $6.4 billion at its IPO last week — even though it generated 160 times more revenue than Trump Media.

Polar ice is melting and changing Earth’s rotation. It’s messing with time itself. — The hours and minutes that dictate our days are determined by Earth’s rotation. But that rotation is not constant; it can change ever so slightly, depending on what’s happening on Earth’s surface and in its molten core. These nearly imperceptible changes occasionally mean the world’s clocks need to be adjusted by a “leap second,” which may sound tiny but can have a big impact on computing systems. Plenty of seconds have been added over the years. But after a long trend of slowing, the Earth’s rotation is now speeding up. For the first time ever, a second will need to be taken off. More: UTC as now defined will require a negative discontinuity by 2029

Researchers Show that Tardigrade Proteins Can Slow Metabolism in Human Cells — Measuring less than half a millimeter long, tardigrades — also known as water bears — can survive being completely dried out; being frozen to just above absolute zero (about minus 458 degrees Fahrenheit, when all molecular motion stops); heated to more than 300 degrees Fahrenheit; irradiated several thousand times beyond what a human could withstand; and even survive the vacuum of outer space. They survive by entering a state of suspended animation called biostasis, using proteins that form gels inside of cells and slow down life processes.

Reversal of biological clock restores vision in old mice

People bought 43 million vinyl records last year, according to the Recording Industry Association of America (RIAA). That’s 6 million more than the number of CDs sold in 2023

donna summer billboards in the 1970s

ScreenAnarchy: EASTER BLOODY EASTER Clip: Horror Comedy Available Now on VOD!

A woman must protect her small town from the Jackalope and his army of devilish bunnies as they embark on a murder spree over the Easter weekend.

[Read the whole post on screenanarchy.com...]

ScreenAnarchy: ASPHALT CITY Review: Raw Intensity, Brutal Stress, Overwhelmed Paramedics

Sean Penn and Tye Sheridan star in Jean-Stéphane Sauvaire's intense thriller.

[Read the whole post on screenanarchy.com...]

ScreenAnarchy: KINDS OF KINDNESS Teaser: Yorgos Lanthimos' Next Film Coming This June

Yorgos Lanthimos is back from awards season with Kinds of Kindness. The upcoming anthology is from the director of such faves as Poor Things, The Favourite, The Killing of a Sacred Deer, The Lobster, etc etc etc. He's made a lot of faves, basically.  The first teaser came out today. As a teaser should, it doesn't give away much. The official synopsis reads as such: Kinds of Kindness is a triptych fable, following a man without choice who tries to take control of his own life; a policeman who is alarmed that his wife who was missing-at-sea has returned and seems a different person; and a woman determined to find a specific someone with a special ability, who is destined to become a prodigious spiritual leader. The...

[Read the whole post on screenanarchy.com...]

Colossal: Peter Frederiksen Dramatizes the Dark Humor of Classic Cartoons in His Cropped Embroideries

an embroidery of a file cabinet drawer pulled out ridiculously far

“The days keep getting longer.” All images © Peter Frederiksen, shared with permission

Chicago-based artist Peter Frederiksen (previously) pinpoints the most ridiculous, exaggerated moments in cartoons and animated shows to dramatize them further into absurdity. Cropping a single outlandish action or event, Frederiksen uses free-motion machine embroidery to stitch stylized compositions that, out of context, emphasize their dark humor.

Recent works include a Looney Tunes-style mishmash of feet and fists that burst through a bulging door in “Some locks won’t hold” and the tongue-in-cheek archery challenge of “Going easy on myself.” Often focusing on escalated tensions, the embroideries accentuate moments of high anxiety in a nostalgic, comforting childhood medium.

Frederiksen has started to switch to digital jacquard weavings for larger pieces. The base becomes a guide for his stitches and provides a colorful backing, which allows for less dense compositions. He’s also incorporated more unwieldy crops, including in works like “The days keep getting longer,” portraying a preposterously elongated filing cabinet.

In April, Frederiksen will open a solo show at Steve Turner Gallery in Los Angeles, along with a dual show in June at UNION Gallery in London. He plans to release a limited-edition print with All Star Press on April 25 and has a candle collaboration coming this spring with Varyer. Follow his latest works and chances to attend one of his workshops in Chicago on Instagram.

an embroidery of a hand holding a hot dog with mustard and additional links still connected to the ends

“Start in the middle and work back”

an embroidery of a cartoon character pushing against a bulging door with hands and feet poking through the sides

“Some locks won’t hold”

an embroidery of an axe chopping a tree that's barely standing

“Closer with every cut”

a hand holds a bow to shoot at a very close target

“Going easy on myself”

a light shines on a wooden chair with a dollar on it that's tied to a string

“Interrogation of desire”

Do stories and artists like this matter to you? Become a Colossal Member today and support independent arts publishing for as little as $5 per month. The article Peter Frederiksen Dramatizes the Dark Humor of Classic Cartoons in His Cropped Embroideries appeared first on Colossal.

ScreenAnarchy: RENEGADE NELL Review: All About the Avatars

Sally Wainwright creates a new action-comedy-fantasy adventure series about the highwaymen era in England, starring Louisa Harland and Adrian Lester, debuting on Disney Plus.

[Read the whole post on screenanarchy.com...]

Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: A chilling nation

Looks like the writing’s on the wall. It says, Help!

Consider this: during Covid almost 900,000 Canadian businesses were handed $49.2 billion in loans of up to $60,000. What a sweetheart deal. No interest. No payments. And if they partially repaid the debt by early 2024, they got to keep a third of it – $20,000. Free money. All they had to do was take the cash, sit on it for a while, repay a hunk and pocket the rest.

Alas, the original deadline was extended because hundreds of thousands couldn’t pay. Now we learn over 200,000 borrowers took out fresh commercial loans to repay Ottawa so they could keep the twenty grand. The loan money went poof. Business was marginal.

Conclusions: when the government gives people money it never ends well. Second, Canadian small business is sick.

And did you see the comment posted this week from blog dog Tom, in Mississauga? Chilling.

Our company’s blind and shutter dealer in Woodstock went from record sales in Q1 2023 to, in Q1 2024, its worst sales since opening in 1995.

The factory beside our place in Oakville closed production 2 weeks ago.

My company is abandoning our office space on North Service Road in Oakville at the end of next month and did 2 more permanent layoffs last week.

The goalie on my Tuesday hockey team, apparel wholesale, had his worst Jan and Feb in 26 years.

I’m going with the anecdotal evidence: that Ontario is already in recession, and that there were more than 100,000 private sector jobs lost in March.

Hmmm. Not good. But consistent with a lot of things we know about our nation at the moment.

We are the most indebted country in the G7 when it comes to households. More than $200 billion in mortgage loans are coming up for renewal in the next few months, causing further distress. Our economy is barely growing – expanding at a meagre 1%, just a third of the growth Americans are experiencing. Every sector of society is depressed and cash-strapped. Unemployment ticks relentlessly higher. The Ontario government yesterday announced a fat new deficit. The feds are running more red ink in the current fiscal year as businesses hurt and corporate taxes tank. Festivals and long-running events, like Just for Laughs, are going bankrupt amid unrepayable debts. Transit systems are hurting. The Toronto school board is $20 million a year in the hole. That city has a deficit of $1.5 billion. The Canadian banks have set aside huge whacks of money to cover potential bad loans. On and on it goes.

So the evidence abounds. The economy is cooling faster than Trudeau poll numbers. Higher interest rates have, as it turns out, been crushing. Consumer spending has dropped, and the cost of living along with it. Our latest CPI was 2.8%. The month before was 2.9%. So inflation here is coming off, along with the GDP, just as both in America heat up.

Clearly Canada is more sensitive to higher rates than the US because we’ve done a fine job of pickling ourselves in debt. Look at the latest OSFI ruling (last Friday) which seeks to corral loans to people with debts exceeding 450% of their incomes. And there are a helluva lot of them.

What does this mean?

In a word, the CB won. Ten interest rate increases taking the Bank of Canada policy rate from one quarter of one per cent all the way to five – a 1,900% jump – were the meds needed to turn 8% inflation into something with a 2-handle. It also sank economic growth and is now sucking up private and public money in alarming debt service payments. Almost $40 billion in tax money is being burned in servicing the federal debt alone. And we’re still increasing it. Meanwhile the greatest portion of the CPI is now shelter costs – mostly mortgage interest – which Tiff Macklem directly controls.

Low growth. Falling inflation. Rising interest charges. Job losses. Small businesses unable to pay gentle loans. Plant closures. Housing starts on the decline. Rising mortgage delinquencies. Growing bank loan loss provisions.

If this continues economists know what the next step will be. The R-word. Canada risks slipping into recession while the US – despite its political clown show and global entanglements – has robust growth, full employment and record markets.

This makes interest rate cuts in 2024 inevitable. It also builds the case that we move before the Fed does. Delay could bring misery to many.

“We have long been of the view that the BoC will move ahead of the Fed,” says BMO’s chief economist Doug Porter. And so the odds of a cut on April 10th have gone from about zero to 20%. The chances of a drop at the setting after that (June 5th) are now 70%.

Will our guys do the right thing and save our indebted tails?

You should hope so.

About the picture: “Thanks from Victoria,” writes Rob. “I’ve enjoyed your insight since I first discovered your newspaper column 20+ years ago. I’m sending you a clipping that was lost in my files, kept to remind me why I did what I did. Also, why math becomes more instructive when more variables and opportunity costs are included. This is our Beast, Urso. I get this attitude from him when he figures I’ve stared too long at the talking glowing rectangles, and should switch to play. My wife likes him more than me, but I suspect it’s because he doesn’t talk as much.”

To be in touch or send a picture of your beast, email to ‘garth@garth.ca’.

CreativeApplications.Net: Open call for half scholarships – Master in Design for Responsible AI by IAM and Elisava

Open call for one of the partial scholarships to cover 50% of the tuition fees as part of the Elisava Masters' Scholarships 2024.

Submitted by: mdrai
Category: Member Submissions

CreativeApplications.Net (CAN) is a community supported website. If you enjoy content on CAN, please support us by Becoming a Member. Thank you!

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: “What the Water Gives Me” by Artist Claudia Koh

Claudia Koh

Claudia Koh’s Website

Claudia Koh on Instagram

Ideas: Conflicted: a Ukrainian journalist covers her nation at war

“We face a continual tension between holding the government to account, and not wanting the enemy to undermine us by exploiting bad news,” says Ukrainian journalist Veronika Melkozerova. She delivered this year's Peter Stursberg Foreign Correspondents Lecture, focusing her talk on what Ukrainian journalists confront daily: patriotism versus journalism.

Open Culture: Hear the Evolution of Mozart’s Music, Composed from Ages 5 to 35

More than a quarter of a millennium after he composed his first pieces of music, different listeners will evaluate differently the specific nature of Wolfgang Amadeus Mozart’s genius. But one can hardly fail to be impressed by the fact that he wrote those works when he was five years old (or, as some scholars have it, four years old). It’s not unknown, even today, for precocious, musically inclined children of that age to sit down and put together simple melodies, or even reasonably complete songs. But how many of them can write something like Mozart’s “Minuet in G Major”?

The video above, which traces the evolution of Mozart’s music, begins with that piece — naturally enough, since it’s his earliest known work, and thus honored with the Köchel catalogue number of KV 1. Thereafter we hear music composed by Mozart at various ages of childhood, youth, adolescence, and adulthood, accompanied by a piano roll graphic that illustrates its increasing complexity.

And as with complexity, so with familiarity: even listeners who know little of Mozart’s work will sense the emergence of a distinctive style, and even those who’ve barely heard of Mozart will recognize “Piano Sonata No. 16 in C major” when it comes on.

Mozart composed that piece when he was 32 years old. It’s also known as the “Sonata facile” or “Sonata semplice,” despite its distinct lack of easiness for novice (or even intermediate) piano players. It’s now cataloged as KV 545, which puts it toward the end of Mozart’s oeuvre, and indeed his life. Three years later, the evolutionary listening journey of this video arrives at the “Requiem in D minor,” which we’ve previously featured here on Open Culture for its extensive cinematic use to evoke evil, loneliness, desperation, and reckoning. The piece, KV 626, contains Mozart’s last notes; the unanswerable but nevertheless irresistible question remains of whether they’re somehow implied in his first ones.

Related content:

Hear All of Mozart in a Free 127-Hour Playlist

Hear the Pieces Mozart Composed When He Was Only Five Years Old

Read an 18th-Century Eyewitness Account of 8‑Year-Old Mozart’s Extraordinary Musical Skills

Mozart’s Diary Where He Composed His Final Masterpieces Is Now Digitized and Available Online

What Movies Teach Us About Mozart: Exploring the Cinematic Uses of His Famous Lacrimosa

See Mozart Played on Mozart’s Own Fortepiano, the Instrument That Most Authentically Captures the Sound of His Music

Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.

Open Culture: Radiohead’s “Creep” Sung by a 1,600-Person Choir in Australia

Everybody can sing. Maybe not well. But why should that stop you? That’s the basic philosophy of Pub Choir, an organization based in Brisbane, Australia. At each Pub Choir event, a conductor “arranges a popular song and teaches it to the audience in three-part harmony.” Then, the evening culminates with a performance that gets filmed and shared on social media. Anyone (18+) is welcome to attend.

Above, you can watch a Pub Choir performance, with 1,600 choir members singing a moving version of Radiohead’s “Creep.” On their YouTube channel, you can also find Pub Choir performances of Coldplay’s “Yellow,” Toto’s “Africa,” and the Bee Gees’ “How Deep Is Your Love.”

Find other choir performances in the Relateds below.

via Kottke

Related Content

A Big Choir Sings Patti Smith’s “Because the Night”

A Choir with 1,000 Singers Pays Tribute to Sinéad O’Connor & Performs “Nothing Compares 2 U”

Watch David Byrne Lead a Massive Choir in Singing David Bowie’s “Heroes”

Patti Smith Sings “People Have the Power” with a Choir of 250 Fellow Singers

Penny Arcade: Carnage

New Comic: Carnage

Colossal: In His Ongoing ‘Descendants’ Series, Drew Gardner Recreates Striking Portraits of Black Civil War Soldiers

A side-by-side image of two black-and-white photographic portraits. On the left, Civil War soldier Richard Oliver sits in uniform. On the right, his descendant Jay Miller is seated in a recreation of the original portrait.

Left: Richard Oliver. Right: Jay Miller, descendant of Oliver. All images courtesy of Drew Gardner, shared with permission

The idea for Drew Gardner’s series Descendants emerged from a simple observation by his mother: she noticed that Gardner resembled his grandfather. Intrigued by how traits are passed down—not just as physical likeness but the elemental foundations of DNA—he began researching and documenting the lineages of historical figures. In the nearly two decades since the project started, Gardner has met and photographed relatives of notable people like Charles Dickens, Berthe Morisot, Napoleon, Geronimo, and Frederick Douglass.

A few years into the series, something increasingly bothered him: most of his subjects were white. Reflective of the historical erasure of people of color from Western history books, archives, and art collections, the subjects whose descendants Gardner tracked down were largely European and famous. But he knew there was more to the story.

In 2020, the U.K.-based photographer collaborated with Smithsonian Magazine to produce a U.S. installment of portraits, which added a layer of nuance to his work: “Ordinary people have paid an incredible price for where we are today with our democracy, with the nations we live in,” he tells Colossal. At a time when significant Black historical sites face an increased risk of loss as they’re abandoned or forgotten, capturing history on film is a powerful and effective way to preserve it.

When Gardner visited the U.S. in February of this year, he was puzzled that he noticed few indications that it was Black History Month. “I honestly wouldn’t have known, and I am quite a bit of a media beast,” Gardner says. “I don’t think I saw a single mention of Black History Month.” Observations like these only reinforce the importance of highlighting the contributions of Black people and people of color throughout history, and Gardner felt compelled to focus on both influential and little-known people whose actions have significantly shaped culture and politics. 

 

A side-by-side image of two black-and-white photographic portraits. On the left, Harriet Tubman sits on a chair. On the right, her descendant, Deanne Stanford Walz, sits in a recreated portrait.

Left: Harriet Tubman. Right: Deanne Stanford Walz, descendant of Tubman

Gardner dove into The Black Civil War Soldier by acclaimed photographer Deborah Willis, containing more than 70 images, many of which are rarely reproduced and several of which feature unnamed soldiers. Through additional research, Gardner was able to compile a list of 120 portraits in which the subjects were named. He’s found more in the meantime, now exceeding 200, yet that number is still remarkably small within the broader context of Civil War portraiture.

Genealogical researcher Ottawa Goodman collaborated with Gardner to narrow down about 25 of those portraits to begin tracing relatives, relying on the WikiTree U.S. Black Heritage Project for help. Of those 25, the pair only managed to connect with descendants of about six original sitters. Working with designers who assisted with set design, costumes, props, and styling, Gardner recreated and captured the spirit of the original photographs, from the iconic seated portrait of Harriet Tubman to the poignant image of a young drummer named David Miles Moore.

He’s sensitive to the fact that the photographs are not only treasured family heirlooms but integral and emotional elements of family heritage. Sometimes, living descendants are enthusiastic to learn more about the project and participate; other times, he doesn’t receive a response.

The American Civil War, which in many ways marked the dawn of photography, provides a deep well for Gardner’s research, revealing countless untold stories. “This is mainstream photography for, not quite the masses, but getting there. It’s one of the first wars that photography played a major part,” he says.

Find much more of the Descendants series on Gardner’s website and Instagram.

 

A side-by-side image of two black-and-white photographic portraits. On the left, Civil War soldier Andrew Jackson Smith stands in uniform. On the right, his descendant Kewsi Bowman poses in a recreation of the original portrait.

Left: Andrew Jackson Smith. Right: Kewsi Bowman, a descendant of Smith

A side-by-side image of two black-and-white photographic portraits. On the left, Civil War soldier Lewis Douglass sits in uniform. On the right, his descendant Austin Morris is seated in a recreation of the original portrait.

Left: Lewis Douglass, son of Frederick Douglass. Right: Austin Morris, a descendant of Frederick Douglass

A side-by-side image of two black-and-white photographic portraits. On the left, young Civil War drummer David Miles Moore stands in uniform. On the right, his descendant Neikoye Flowers stands in a recreation of the original portrait.

Left: David Miles Moore. Right: Neikoye Flowers, a direct descendant of Moore

A side-by-side image of two black-and-white photographic portraits. On the left, Civil War soldier Louis Troutman stands in uniform. On the right, his descendant Chris Wilson stands in a recreation of the original portrait.

Left: Louis Troutman. Right: Chris Wilson, a descendant of Troutman

Do stories and artists like this matter to you? Become a Colossal Member today and support independent arts publishing for as little as $5 per month. The article In His Ongoing ‘Descendants’ Series, Drew Gardner Recreates Striking Portraits of Black Civil War Soldiers appeared first on Colossal.

Disquiet: Day … Groundhog Day … Groundhog

I’m excited to have a short piece on Groundhog Day, one of my favorite movies (and, perhaps just as key, one of my favorite stories), in this series alongside some friends and writers I admire. It’ll be rolling out on hilobrow.com over the next few months.

Check out the announcement post.

Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: Swapping

.
By Guest Blogger Scott Booth
.

Investment returns generally come in three forms: interest income, dividend income and capital gains from price appreciation. Capital gains get captured on your T5008, and the tax on them is deferred until a security that has appreciated in value is sold. Dividends and interest payments come in at regular intervals from stocks and bonds respectively, and the taxes on these income streams are owed in the year they are received. For ETF holders these are captured on your T3s. Don’t file until you have them.

The tax treatment and relative appeal of each component of investment returns varies by tax bracket. In low brackets, dividends get the most favourable treatment. As one’s taxable income increases and they have the pleasure of handing a larger percentage of every dollar earned over to the kind folks at the CRA, capital gains get taxed more favourably.

I’m based in Ontario, but let’s use BC tax rates as an example so as not to introduce a regional bias. Provincial realities will vary but general themes are fairly consistent across this great nation.

Combined Federal and BC Tax Brackets and Tax Rates 2023

Source: Taxtips.ca

If a BC resident has total income of less than $165,430, dividends are going to be the type of investment return with the most tax appeal.

For those in the top tax bracket, capital gains are the most efficient form of return. The CRA will expect you to pony up 53.50% of those incoming interest payments, 36.54% of those juicy dividends being paid out on a regular basis and just 26.75% of those realized capital gains.

Business owners of Canadian-controlled private corporations (CCPCs) benefit from the Small Business Deduction, a program that imposes a lower corporate tax rate on small businesses. They enjoy lower corporate tax rates of 9%-12% (depending on province) on active business income up to $500,000. As investments within a CCPC grow, passive income rules can erode or eliminate this deduction. That erosion starts when passive income exceeds $50K/year, and the benefit is eliminated once passive income hits $150K/year. Dividend and interest income can be fine up to a point, but beyond it they become punitive when earned in a corporate account. In this case, capital gains will be the preferred form of investment returns.

Generally, investors should try to shelter income-bearing investments in RRSPs and stick growth- and dividend-focused assets in taxable accounts. When portfolio sizes grow, maintaining an asset allocation can necessitate holding income-producing assets in a non-registered account. Music to Mr. Tax Man’s ears.

There are products out there that investors can utilize to manage the composition of their investment returns, notably swap-based ETFs. We’re not talking about the upside-down pineapple on a cruise ship kind of swap here. Total-return swaps and the corporate class ETFs that utilize them are the vehicles that make this possible and accessible to everyday investors.

So, what is the total return on an asset (usually an index)?

Total return is equal to the change in the price of the asset (capital gain/loss) + the distribution (dividends/distributions/coupon payments the asset makes).

If the price of the index goes up 5% and the underlying pays a 3% dividend, the total return is 8%.
Housed within a mutual fund corporation, swap-based ETFs are synthetic, meaning the investor doesn’t own the underlying securities…they are just entitled to the total return those securities generate.

These instruments can effectively make the unfavourably taxed distribution portion of total returns disappear, as it is rolled up in the total return. This isn’t the bad tropical-island/lawless-banana-republic, Bernie Madoff/S.B.F. kind of disappear. Investors receive the total return on the index, less fees, and this return gets captured as a capital gain, which can be deferred until the total-return ETF is sold.

The tax savings from utilizing these instruments have the potential to be material.
The example below assumes the investor is in the top tax bracket in BC.

Traditional Bond ETF vs Corp Class Bond ETF Comparison

The investor in the swap-based ETF has considerably more growth annually, picking up 70 bps, almost 50% more after tax, and only pays tax on the gain when the investment is sold. Tax savings and deferral seem to be appealing prospects.
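
To make the mechanics concrete, here is a minimal sketch in Python of the comparison. The 4% yield, ten-year horizon and $100,000 starting amount are illustrative assumptions, not the figures behind the table above; the tax rates are the top BC marginal rates quoted earlier.

```python
# Minimal sketch: after-tax growth of $100,000 over 10 years at an
# assumed 4% annual total return. The tax rates are the top BC
# marginal rates quoted above; yield and horizon are assumptions.

INTEREST_TAX = 0.5350    # top BC rate on interest income
CAP_GAINS_TAX = 0.2675   # top BC rate on realized capital gains
RETURN = 0.04            # assumed annual total return
YEARS = 10
START = 100_000

# Traditional bond ETF: interest distributions are taxed every year.
traditional = START * (1 + RETURN * (1 - INTEREST_TAX)) ** YEARS

# Swap-based ETF: the full return compounds untouched; capital gains
# tax is paid once, when the position is finally sold.
swap_gross = START * (1 + RETURN) ** YEARS
swap_after_tax = swap_gross - (swap_gross - START) * CAP_GAINS_TAX

print(f"Traditional bond ETF, after tax: ${traditional:,.0f}")
print(f"Swap-based ETF, after tax:       ${swap_after_tax:,.0f}")
```

Under those assumptions the deferred capital-gains treatment finishes comfortably ahead, which is the entire pitch.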

Investing is all about risk/reward trade-offs. There are a different set of risks associated with investing in swap-based ETFs compared to traditional ones.

Counterparty risk on the swap contracts is the obvious one, but the counterparties engaged in the contracts are Big Six banks. Pretty safe and secure.

Regulatory change is another significant risk. There is always the chance of the CRA cracking down on the corporate class structure used to avoid distributions. If there’s anything the government doesn’t like, it’s not receiving tax income. Remember Income Trusts?

Income management risk is another significant consideration with swap-based ETFs. It is critically important to their tax-advantaged status that the mutual fund corporation housing them does not generate net income within the overall corporate structure, lest it face taxation at punitive rates. The mix of ETFs within a corporate class structure will influence this (having some high-cost ones in there is beneficial), as will the settlement of swap contracts and the non-capital loss pools that can be used to offset income. Any investor considering these assets should ensure they understand the associated risks; never invest in a product you don’t understand.

That having been said, it does appear that there are substantial benefits that can be realized, particularly for investors with unsheltered fixed income holdings in higher tax brackets and those with chunky corporate investment accounts.

Might be worth swapping.

Scott Booth, CFA, is a seasoned financial advisor and licensed portfolio manager. Over the past 18 years he has worked in the capital markets as an analyst, trader and advisor with major banks and now with Turner Investments.
.

About the picture: “Garth, we love your blog and have been reading it daily for quite some time now,” writes Eric. “Always informative and most days we get some great laughs on top of the sage advice. Moved from Oakville to Waterdown a few years ago and luckily bought in one of the few gullies over the past 6 years. The main reason was to have more of a yard for our one dog and to add a second. Just added our third to the pack a couple months ago. Figured the way house prices are going it’s much cheaper to have 3 dogs than 3 kids since we won’t need to give them each a down payment 20 years from now for their first condo. The 3 Labradoodles – Oliver is red, Benjamin is the black and white, and Isabelle is the brown new puppy. Feel free to use the pic in your blog!”

To be in touch or send a picture of your beast, email to ‘garth@garth.ca’

 

Schneier on Security: Hardware Vulnerability in Apple’s M-Series Chips

It’s yet another hardware side-channel attack:

The threat resides in the chips’ data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it’s actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel’s 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years.

[…]

The breakthrough of the new research is that it exposes a previously overlooked behavior of DMPs in Apple silicon: Sometimes they confuse memory content, such as key material, with the pointer value that is used to load other data. As a result, the DMP often reads the data and attempts to treat it as an address to perform memory access. This “dereferencing” of “pointers”—meaning the reading of data and leaking it through a side channel—is a flagrant violation of the constant-time paradigm.

[…]

The attack, which the researchers have named GoFetch, uses an application that doesn’t require root access, only the same user privileges needed by most third-party applications installed on a macOS system. M-series chips are divided into what are known as clusters. The M1, for example, has two clusters: one containing four efficiency cores and the other four performance cores. As long as the GoFetch app and the targeted cryptography app are running on the same performance cluster—even when on separate cores within that cluster—GoFetch can mine enough secrets to leak a secret key.

The attack works against both classical encryption algorithms and a newer generation of encryption that has been hardened to withstand anticipated attacks from quantum computers. The GoFetch app requires less than an hour to extract a 2048-bit RSA key and a little over two hours to extract a 2048-bit Diffie-Hellman key. The attack takes 54 minutes to extract the material required to assemble a Kyber-512 key and about 10 hours for a Dilithium-2 key, not counting offline time needed to process the raw data.

The GoFetch app connects to the targeted app and feeds it inputs that it signs or decrypts. As it’s doing this, it extracts the app’s secret key that it uses to perform these cryptographic operations. This mechanism means the targeted app need not perform any cryptographic operations on its own during the collection period.

Note that exploiting the vulnerability requires running a malicious app on the target computer. So it could be worse. On the other hand, like many of these hardware side-channel attacks, it’s not possible to patch.
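
For readers unfamiliar with the constant-time paradigm mentioned above, here is a minimal illustrative sketch (mine, not from the GoFetch paper) of the coding discipline at stake: the first comparison leaks the secret through timing, while the second takes the same time regardless of input.

```python
import hmac

def leaky_equal(secret: bytes, guess: bytes) -> bool:
    # Early exit: running time depends on how many leading bytes match,
    # so an attacker can recover the secret byte by byte from timing.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so its timing does not depend on the secret's value.
    return hmac.compare_digest(secret, guess)
```

What makes the DMP result notable is that it defeats code like the second version too: even when a program makes no secret-dependent branches or loads itself, the prefetcher may treat secret data as a pointer and dereference it, recreating exactly the secret-dependent memory traffic that constant-time code is written to avoid.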

Slashdot thread.

Schneier on Security: Security Vulnerability in Saflok’s RFID-Based Keycard Locks

It’s pretty devastating:

Today, Ian Carroll, Lennert Wouters, and a team of other security researchers are revealing a hotel keycard hacking technique they call Unsaflok. The technique is a collection of security vulnerabilities that would allow a hacker to almost instantly open several models of Saflok-brand RFID-based keycard locks sold by the Swiss lock maker Dormakaba. The Saflok systems are installed on 3 million doors worldwide, inside 13,000 properties in 131 countries. By exploiting weaknesses in both Dormakaba’s encryption and the underlying RFID system Dormakaba uses, known as MIFARE Classic, Carroll and Wouters have demonstrated just how easily they can open a Saflok keycard lock. Their technique starts with obtaining any keycard from a target hotel—say, by booking a room there or grabbing a keycard out of a box of used ones—then reading a certain code from that card with a $300 RFID read-write device, and finally writing two keycards of their own. When they merely tap those two cards on a lock, the first rewrites a certain piece of the lock’s data, and the second opens it.

Dormakaba says that it’s been working since early last year to make hotels that use Saflok aware of their security flaws and to help them fix or replace the vulnerable locks. For many of the Saflok systems sold in the last eight years, there’s no hardware replacement necessary for each individual lock. Instead, hotels will only need to update or replace the front desk management system and have a technician carry out a relatively quick reprogramming of each lock, door by door. Wouters and Carroll say they were nonetheless told by Dormakaba that, as of this month, only 36 percent of installed Safloks have been updated. Given that the locks aren’t connected to the internet and some older locks will still need a hardware upgrade, they say the full fix will still likely take months longer to roll out, at the very least. Some older installations may take years.

If ever. My guess is that for many locks, this is a permanent vulnerability.

CreativeApplications.Net: Embedded/Embodied – Sound as a means of obtaining knowledge

Embedded/Embodied unites ‘acoustics’ and ‘epistemology’ into acoustemology, investigating sound as a means of obtaining knowledge and delving into what can be known through listening.

Category: Javascript / Sound / Three Js / Unity


Colossal: Elaborate Still Lifes Erupt with Vivid Color in Eric Wert’s Oil Paintings

A detailed still life oil painting of an overflowing vase of flowers against a teal patterned background.

“Acquiesce” (2021), oil on canvas, 72 x 60 inches. All images © Eric Wert, shared with permission

“For me, the experience of painting an object reveals just how alien and unknowable it truly is,” says Eric Wert, whose vibrant still lifes seem to glow from within. From decadent bouquets that overflow from their vases to a pair of rain-speckled magnolia branches, the subjects of the Portland, Oregon-based artist’s oil paintings are portrayed in hyperrealistic detail.

Wert draws on his background in scientific illustration, a discipline that attracted him “because of the emphasis on rigorous accuracy in representation,” he says. “Over time, I found that objective technical drawings would never convey the complex feelings experienced while observing my subjects.”

Contributing to the long history of still life in European art history, Wert’s compositions take a contemporary view of the tradition while retaining the elements that characterize the genre: composition and precision. “My oil paintings are intended to be both seductive and destructive—a highly controlled meditation on the impossibility of control,” he says. Abundant flowers spill from displays and cross sections of fruit reveal sensual textures. The backdrops also complement the central subject, often depicting ornamental textiles or wallpaper patterns.

Wert references the qualities of vanitas painting in particular, a genre brimming with symbolism intended to remind the viewer of the worthlessness of worldly desires or pleasures within the broader context of mortality. “Conveying a recognizable image happens early on in the process,” Wert says, “but my favorite part of the painting happens days or weeks later when I stop trying to control it—when I get out of the way and let the object reveal its other self.”

Three of the artist’s paintings are currently included in the group show Still Life at Gallery Henoch in New York City, which continues through April 12. Find more on Wert’s website, where prints of some of his paintings are available for purchase in addition to a selection of puzzles and cards published by Pomegranate. Stay up to date by following the artist on Instagram.

 

A detailed still life oil painting of a bowl full of tropical fruit, set against a background of a Chinese dragon textile pattern.

“Dragon Breath” (2023), oil on canvas, 30 x 30 inches

A detailed still life oil painting of magnolias on a black surface with water droplets.

“Magnolia” (2022), oil on panel, 18 x 24 inches

A detailed still life oil painting of a bird's nest made from moss on a branch against a dark violet background.

“Moss Nest” (2024), oil on panel, 20 x 16 inches

A detailed still life oil painting of a full crystal bowl of plums in various colors, set against a teal and gold background.

“Plums” (2023), oil on panel, 24 x 24 inches

A detailed still life oil painting of an arrangement of ferns and moss.

“Sottobosco” (2022), oil on canvas, 40 x 50 inches

A detailed still life oil painting of an overflowing bowl of vegetables and fruit, including cabbage, artichoke, tomato, grapes, and more.

“Still Life With Medieval Tapestry” (2016), oil on canvas, 36 x 36 inches

A vibrant still life painting of an overflowing arrangement of flowers.

“The Arrangement” (2015), oil on panel, 50 x 40 inches

Part of an elaborate oil painting of flowers, pictured with the artist's hand applying a detail with a small brush.

Detail of a work in progress

Do stories and artists like this matter to you? Become a Colossal Member today and support independent arts publishing for as little as $5 per month. The article Elaborate Still Lifes Erupt with Vivid Color in Eric Wert’s Oil Paintings appeared first on Colossal.

Michael Geist: Tweets Are Not Enough: Why Combatting Relentless Antisemitism in Canada Requires Real Leadership and Action

The Jewish holiday of Purim over the weekend sparked the usual array of political tweets featuring some odd interpretations of the meaning of the holiday and expressing varying degrees of support for the Jewish community. But coming off one of the worst weeks in memory – cancelled Jewish events due to security concerns, antisemitism in the mainstream media, deeply troubling comments on the floor of the House of Commons, and the marginalization of some Jewish MPs in government – generic statements of support no longer cut it. The Globe and Mail has noted the “dangerous slide into antisemitism” and called for a House motion unequivocally condemning antisemitism. This post provides further context to that piece, arguing that such a motion is necessary but insufficient, since it is leadership and real action from our politicians, university presidents, and community groups that is desperately needed.

The relentless antisemitism in Canada has left many in the community numb, creating a new normal that has obvious echoes of prior generations who faced pogroms, ethnic cleansing, and the Holocaust. Some point to events in Israel and Gaza to explain the antisemitic surge, yet Canadian Jews are no more responsible for the actions of the Israeli government than Canadian Muslims are to blame for last week’s ISIS terrorist attack in Russia. Since October 7th, there have been terrorism charges in Ottawa involving plans to target the Jewish community, firebombs, shots, and vandalism targeted at Jewish schools and community centres in Montreal, Toronto, and Fredericton, vandalism and threats at Jewish owned businesses, as well as protests outside synagogues, Jewish institutions, and Jewish neighbourhoods. In addition, there is the antisemitism in the cultural world including the cancellation of a Jewish film festival in Hamilton (since relocated) and plays with Jewish or Israeli themes cancelled in British Columbia. Meanwhile, Jewish politicians have been targeted with threats or pressured out of office altogether.

The situation on university campuses merits special mention. The congressional testimony in the U.S. from three presidents seemingly unable to articulate a clear position on the implications of calling for genocide of Jews captured headlines last year, but here in Canada being openly Jewish on campus carries real risk, including efforts to evict Jewish organizations from campus. Universities have policies in place designed to promote safety and inclusivity, but Jews know that outward expressions of their religion run the risk of verbal or physical abuse and that death threats or antisemitic graffiti can be found on campus walls. Indeed, buildings carrying Jewish names, reflecting a commitment from the community to give back to these institutions, are now specifically targeted by protesters. Universities react quickly to incidents targeting other groups, but rarely for Jewish students or faculty. In contrast to other external signals of inclusivity, there are no signs on faculty doors that say “kippas welcome here” and EDI officers often don’t think of the wellbeing of Jewish students as part of their mandate. Further, the situation is little better in secondary schools, where school boards are often missing in action as Jewish teachers hide their religion and live in fear of being targeted.

This is simply the reality of being Jewish in Canada in 2024, where antisemitic incidents represent the majority of reported hate crimes in our largest cities. Going to synagogue or Jewish schools often involves a police presence and speaking about your concerns in public requires hushed tones. In work environment after work environment – doctors, public servants, labour unions, and more – one hears about a steady stream of antisemitism that has led to resignations and lawsuits. Some now choose to hide their religion in the hope of being ignored or remain silent for fear of the terrifying antisemitic backlash that speaking out invariably sparks. For a country that prides itself on rights of equality, freedom of expression, and freedom of religion, these rights and freedoms do not apply in equal measure right now for the Canadian Jewish community.

The government has made inclusivity its brand and one would have hoped that it would be vocally supportive of the Jewish community in words and deeds. Yet the silence from the majority of MPs and misleading comments from government ministers in 2022 when it was revealed that Canadian Heritage had funded an antisemite as part of its anti-hate program was a warning sign of the cowardice that exists when it comes to antisemitism. That cowardice was repeated last week when Pascale St-Onge, the new Canadian Heritage minister, was unwilling to forcefully call out an antisemitic cartoon published in a major French newspaper or when MPs avoid referencing antisemitism by relying on more generic anti-hate messages. Domestic political calculations appear to trump principle and after the murder of six million Jews in the Holocaust and a Canadian immigration policy that was once premised on “none is too many”, the Jewish community is seemingly too small today to matter to governments.

This is not an easy post to write. But after the Globe and Mail last week called for a House motion unequivocally condemning antisemitism, I felt it necessary to endorse the proposal and supplement it by arguing that supportive words alone are insufficient. The motion must be accompanied by action. That could start with ensuring that public dollars for education and cultural institutions do not go to institutions that maintain a hostile environment by failing to address antisemitism, narrowing Bill C-63 to online harms rules that hold platforms accountable for failing to abide by their own policies, providing financial support for security of Jewish schools and community institutions, promoting antisemitism education within the public service, and implementing Holocaust education in our schools. There needs to be similar motions and commitments to act from provincial and local governments, since many of the issues fall within their jurisdiction.

The story of Purim isn’t about the “triumph of inclusion, love and resilience” as one MP suggested. It is about the personal and political courage summoned by leaders such as Queen Esther to speak out and act against evil. That is the lesson for modern times as we need more of that courage today if we are to confront antisemitism in Canada.

The post Tweets Are Not Enough: Why Combatting Relentless Antisemitism in Canada Requires Real Leadership and Action appeared first on Michael Geist.

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - The Rub



Click here to go see the bonus panel!

Hovertext:
Wow, uncle murdered your Dad and now you've killed your friends and family? That's a real bump in the ol' bowling alley, Hamlet.


Today's News:

Ideas: Kate Beaton: What's lost when working-class voices are not heard

Kate Beaton and her family have deep roots in hard-working, rural Cape Breton, Nova Scotia. In her 2024 Henry Kreisel Memorial Lecture, the popular cartoonist points out what is lost when working-class voices are shut out of opportunities in the worlds of arts, culture, and media.

OCaml Weekly News: OCaml Weekly News, 26 Mar 2024

  1. The Flambda2 Snippets, by OCamlPro
  2. Eio 1.0: First major release
  3. ppx_minidebug 1.3.0: toward a logging framework
  4. Academic OCaml Users Testimonials!
  5. Volunteers for ICFP 2024 Artifact Evaluation Committee (AEC)
  6. First beta release for OCaml 5.2.0
  7. Other OCaml News

Planet Lisp: Joe Marshall: With- vs. call-with-

In Common Lisp, there are a lot of macros that begin with the word “with-”. These typically wrap a body of code, and establish a context around the execution of the code.

In Scheme, they instead have a lot of functions that begin with the words “call-with-”. They typically take a thunk or receiver as an argument, and establish a context around a call to the thunk or receiver.

Both of these forms accomplish the same sort of thing: running some user supplied code within a context. The Scheme way accomplishes this without a macro, but “call-with-” functions are rarely used as arguments to higher order functions. Writing one as a function is slightly easier than writing one as a macro because the compiler takes care of avoiding variable capture. Writing one as a macro leaves it up to the implementor to use appropriate gensyms. Writing one as a macro avoids a closure and a function call, but so does inlining the function. The macro form is slightly more concise because it doesn’t have a lambda at every call site. The function form will likely be easier to debug because it will probably involve a frame on the stack.

There’s no need to commit to either. Just write a “with-” macro that expands into a call to an inline “call-with-” function. This should equally please and irritate everyone.
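
The same duality exists outside the Lisp family. Here is a rough Python analogy (my sketch, not Marshall’s code): a “call-with-” style higher-order function on one hand, and Python’s `with` statement, standing in for the Lisp macro, on the other.

```python
from contextlib import contextmanager

# "call-with-" style: a plain function that establishes a context
# around a call to the supplied thunk.
def call_with_log_section(name, thunk):
    print(f"BEGIN {name}")
    try:
        return thunk()
    finally:
        print(f"END {name}")

# "with-" style: the block form; contextlib plays the role that
# defmacro would play in Common Lisp.
@contextmanager
def log_section(name):
    print(f"BEGIN {name}")
    try:
        yield
    finally:
        print(f"END {name}")

# The function form needs a lambda (thunk) at the call site...
call_with_log_section("work", lambda: print("doing work"))

# ...while the block form does not.
with log_section("work"):
    print("doing work")
```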

Disquiet: Signal Analyzer

Looking forward to some visual feedback in my synthesizer efforts

Jesse Moynihan: Forming 379

Colossal: Densely Heaving Lines Meet at Mountainous Junctures in Lee Hyun Joung’s Paintings

a dense line drawing in blue and white that meet at a central ridge

“Contemplation” (2024), 195 x 130 x 3.5 centimeters. All photos by Nick Verhaeghe, courtesy of Galerie Sept, shared with permission

In Ridge Lines, Lee Hyun Joung navigates along the roving meeting point of two adjoining bodies. The artist melds the artistic and aesthetic traditions of her native Korea with those of her adopted home in Paris, rendering intricately bisected landscapes where the two converge.

Opening next month at Galerie Sept in Brussels, Lee’s solo exhibition comprises several new paintings made with handmade Hanji paper and ink from Korean pigments and fish glue. The artist often works on the floor, drawing each thin, sweeping line in a sort of meditative trance. “Instead of flattening the paper, I let the random embossed pattern show through. I use my brush to create line patterns to emphasize or obstruct the paper’s natural relief. Through the movements of my body, I create a rhythm, without a structured plan,” she told critic Isabelle de Maison Rouge in advance of the show.

Lee’s works capture this repetitive motion as they heave toward the central crest. Her paintings have grown in complexity in recent years, expanding on the mountain-like landscapes to puncture terrains with deep, hidden valleys. The vertical “Chemins de Vie” works, for example,  follow a winding path that loops and turns back on itself to create pockets along the ridge. Likening each line to “a day in the life of a human,” the artist grasps at the connection between time and space where seemingly disparate experiences join together.

Ridge Lines runs from April 4 to May 19. Find more from Lee from Galerie Sept.

 

a dense line drawing in blue and white that appears like crashing waves

“Collision” (2024), 100 x 130 x 3.5 centimeters

a dense line drawing in green and white that meet at a central ridge

“Contemplation Vert” (2024), 147 x 80 x 3.5 centimeters

a dense line drawing in blue and white that meet at a central ridge

“Chemin Bleu” (2024), 130 x 100 x 3.5 centimeters

two dense line drawings in blue and white that meet at a central ridge

Left: “Chemins de Vie 2” (2024), 150 x 50 x 3.5 centimeters. Right: “Chemins de Vie 1” (2024), 150 x 50 x 3.5 centimeters

two works against a white wall in the artist's studio

The artist’s studio

a patchwork of blue and black ink line drawings

“Mémoire du Vent Bojagi” (2024), 195 x 130 x 3.5 centimeters

the artist is sitting on the wood floor drawing on paper with books and canvases surrounding her

The artist working in her studio

Do stories and artists like this matter to you? Become a Colossal Member today and support independent arts publishing for as little as $5 per month. The article Densely Heaving Lines Meet at Mountainous Junctures in Lee Hyun Joung’s Paintings appeared first on Colossal.

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Karma



Click here to go see the bonus panel!

Hovertext:
I'm not saying I'm into sea-cows, I'm saying *if I were an elephant seal* I would be. Don't act weirded out.


Today's News:

Penny Arcade: 18 Things to Know about NFS Unbound

I know there’s lots of new games to play right now but for some reason none of them are especially calling to me. Instead I played NFS Unbound all weekend. I ended up building out my own class A car for the races yesterday and somehow stumbled onto a real monster. I honestly don’t know what I did but this Audi S5 Sportback was an absolute blast to drive and it destroyed my competition. Here’s a couple races I saved.

 

Penny Arcade: Venerable And Inscrutable

Now I'm on another plane, the 737 MAX, whose various debilitations and inopportune door-fallings-out resulted in the removal of Boeing's CEO just an hour or so ago. The only reason he had this job in the first place is because the 2019-2021 groundings launched his predecessor into space - a trip through the Wikipedia page about the incident describes the shit this other asshole was exiled for, and it's utterly fucking nuts. It's hard to believe I'm just learning these details now, and the idea that it didn't result in an internal culture that would forestall their current troubles makes me think we need to bring back the fucking stocks.

Greater Fool – Authored by Garth Turner – The Troubled Future of Real Estate: The pivot?

It was summer of the year after Covid. In August of 2021 a buyer snatched a little house on Riverside Drive in the cozy, blue-collar, lift-lock-famous town of Peterborough. Here’s how the listing agent described it, during a local real estate feeding frenzy:

Quiet Street In A Small Enclave Of Homes, Steps To The Otonabee River, Parks, And Mere Minutes To Highway Access. Beautifully Maintained Grounds, Perennial Gardens And A Gorgeous In-Ground Pool. Walk-Out Basement As Well As A Side Entrance. Wonderful Sunroom To Enjoy Your Morning Coffee. This Bungalow Has Been Meticulously Maintained And Is Pride Of Home Ownership. Open Concept Main Level, Bamboo Floors, Very Private Lounge Area By The Pool To Enjoy Those Hot Days Outside! Lots Of Parking For Multiple Vehicles. It Has A/C, 2012 Furnace, Washer, Dryer, Fridge, Stove, 2016 Shingles. A Pre-Inspected Home.

Owners Barb and Tim were asking $535,000 for the modest raised bung. They got $662,000. In a bidding war with multiple offers (when a variable-rate mortgage was 2.1%) the buyer paid a premium of $127,000, or about 24% more than the couple hoped for.

Here it is:

Something didn’t work out and the place went back on MLS a year later. The new guy listed for $639,000. So after commission (assuming 4%) he stood to lose about $50,000 (plus his closing costs, including land transfer tax). This time a Toronto-based realtor handled the property, and this was her pitch:

This bungalow has some amazing opportunities for a 2-bedroom income or your family with a walk-out basement as well as a side entrance. Wonderful sunroom to enjoy your morning coffee and read your favourite book. This bungalow has been well maintained with an open concept main level and bamboo floors, with a walkout from many areas of this home with a very private lounge area by the pool to enjoy those beautiful sunny days! Lots of parking for all your guests. It has a brand new carpet in the family room where you can enjoy those cold nights snuggled up by the gas fireplace. Steps to the Otonabee River, parks and mere minutes to highway access. Income potential/first-time home buyers.

No sale. So this month she relisted the place, expanding the marketing range to include most of Southern Ontario. “THIS IS A MUST-SEE!” the MLS copy yelled. And the asking price came down to $599,900.

So, the wee house near the river is now on the market for $62,000 less than it was ‘worth’ in 2021. After commission the seller, if he gets full price, would be out of pocket about a hundred thousand. Plus closing costs, improvements, carrying charges, taxes and legals.
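
For anyone checking the math at home, a quick sketch; the 4% commission is the assumption used above, and closing costs are left out.

```python
# Quick check of the Peterborough bungalow arithmetic. The 4%
# commission is the assumption used above; closing costs, land
# transfer tax and carrying charges are extra.
paid_2021 = 662_000
asking_now = 599_900
commission = 0.04

net_at_full_price = asking_now * (1 - commission)
loss_before_costs = paid_2021 - net_at_full_price

print(f"Net proceeds at full asking: ${net_at_full_price:,.0f}")  # ~$575,900
print(f"Loss before closing costs:   ${loss_before_costs:,.0f}")  # ~$86,100
```

Add the land transfer tax, improvements, carrying charges and legals, and ‘about a hundred thousand’ is the right ballpark.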

We don’t know why he bought or why he needs to sell. Maybe he was a flipper or speculator. Maybe life changes brought on this difficult decision. Perhaps he just made a mistake, and felt forced to buy – at any cost and without condition – amid the smoke and heat of the crazy market that emerged from the germy world of the pandemic. Remember how hot and in-demand Bunnypatch places like Peterborough or Woodstock were back then? This blog warned GTA urban refugees that reality would come to bite them in the butt. And, lo, we are there.

Just a small example of a market in the throes of transition.

Yesterday’s post gave you a glimpse into the situation of a DT condo owner who regretted his decision. Today, a bungalow victim. Both apparently paid too much, did so in a time of excess, and now confront a new market reality. These are two properties added to a swelling total of listings across the country. As detailed here on the weekend, despite declining mortgage rates and the expectation of Bank of Canada cuts to come – as well as Spring – inventory is building and supply is exceeding demand. So sales may well increase (as they are) with no guarantee prices will follow. The meme that prices will keep rising until millions of new houses are built is apparently bunk.

Here’s more evidence: in the massive GTA (population six million) almost 60% of all properties changing hands thus far this year have done so for less than the asking price. That compares with 53% that sold below-ask in 2023 and 45% in 2022. HouseSigma says the situation is even more dramatic in Vancouver, where 72% of sales have been for less than the vendors wanted. In urban Toronto, data aggregator Scott Ingram reports one in six condos are now seeing asking-price drops as months of inventory grow.

What’s going on?

Buyers are far more cautious. More choice among available properties has reduced multiple offers and bidding wars. That has resulted in people making conditional offers – most notably upon financing. Lenders are well aware of what’s happening, and appraisals have been coming in light. That can strike panic in the hearts of those who made a firm offer assuming the bank would hand over more than materializes.

For sellers, conditions mean uncertainty. Gone is their chance to dump a property at inflated pandemic valuations. The deal can’t go firm until the buyer gets that mortgage approval, or positive home inspection report. And with each passing day, more listings appear, increasing competition and choice. The buyer can always come back, asking for a reduction. And many apparently are. Or the deal can just die.

So, is Mr. Market quietly healing himself while politicians hand-wring and whimper? Maybe.

Is this the pivot point?

About the picture: “Greetings from Victoria,” writes Michael the famous video producer. “While out enjoying a lovely walk I came across a pup enjoying the afternoon. The frisbee is launched and, after a 40-foot chase, this puppy eased, calculated, leapt into the air and casually retook possession of what clearly belonged to it. The puppy was so skilled in the act that it was repeated many times. It was sheer dumb luck for me to press the shutter while it was airborne.”

To be in touch or send a picture of your beast, email to ‘garth@garth.ca’.

 

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: “Free Dirt” by Artist Jay Wilkinson

Jay Wilkinson

Jay Wilkinson’s Website

Jay Wilkinson on Instagram

Michael Geist: The Law Bytes Podcast, Episode 197: Divest, Ban or Regulate? – Anupam Chander on the Global Fight Over TikTok

New legislation making its way through the U.S. Congress has placed a TikTok ban back on the public agenda. The bill – which would lead to either a divestiture or ban – has passed the House of Representatives and is now headed to the Senate. On the Canadian front,  TikTok is already prohibited on government devices at the federal level alongside some provinces, the government has quietly conducted a national security review, and there are new calls to ban it altogether from the Canadian market. Anupam Chander is a law professor at Georgetown University and leading expert on the global regulation of new technologies. He joined the Law Bytes podcast several years ago when a TikTok ban was raised by the Trump Administration and he returns this week to discuss the latest developments and their broader implications.

The podcast can be downloaded here, accessed on YouTube, and is embedded below. Subscribe to the podcast via Apple Podcast, Google Play, Spotify or the RSS feed. Updates on the podcast on Twitter at @Lawbytespod.

Credits:

Today, What Will Happen to TikTok if the U.S. Bans the App, March 17, 2024

The post The Law Bytes Podcast, Episode 197: Divest, Ban or Regulate? – Anupam Chander on the Global Fight Over TikTok appeared first on Michael Geist.

Ideas: CBC Massey Lectures: Audience Q&A with Astra Taylor

Insecurity has become a "defining feature of our time," says 2023 CBC Massey lecturer Astra Taylor. She explores how rising inequality, declining mental health, and the threat of authoritarianism originate from a social order built on insecurity. In this episode, Astra Taylor answers audience questions from the cross-Canada tour. *This episode originally aired on Nov. 27, 2023.

Penny Arcade: Venerable And Inscrutable

New Comic: Venerable And Inscrutable

The Shape of Code: A paper to forget about

Papers describing vacuous research results get published all the time. Sometimes they get accepted at premier conferences, such as ICSE, and sometimes they even win a distinguished paper award, such as this one appearing at ICSE 2024.

If the paper Breaking the Flow: A Study of Interruptions During Software Engineering Activities had been produced by a final year PhD student, my criticism of them would be scathing. However, it was written by an undergraduate student, Yimeng Ma, who has just started work on a Masters. This is an impressive piece of work from an undergraduate.

The main reasons I am impressed by this paper as the work of an undergraduate, but would be very derisive of it as a work of a final year PhD student are:

  • effort: it takes a surprisingly large amount of time to organise and run an experiment. Undergraduates typically have a few months for their thesis project, while PhD students have a few years,
  • figuring stuff out: designing an experiment to test a hypothesis using a relatively short amount of subject time, recruiting enough subjects, the mechanics of running an experiment, gathering the data and then analysing it. An effective experimental design looks very simple, but often takes a lot of trial and error to create; it’s a very specific skill set that takes time to acquire. Professors often use students who attend one of their classes, but undergraduates have no such luxury, they need to be resourceful and determined,
  • data analysis: the data analysis uses the appropriate modern technique for analyzing this kind of experimental data, i.e., a random effects model (a minimal sketch of what such an analysis can look like appears after this list). Nearly all academic researchers in software engineering fail to use this technique; most continue to follow the herd and use simplistic techniques. I imagine that Yimeng Ma simply looked up the appropriate technique on a statistics website and went with it, rather than experiencing social pressure to do what everybody else does,
  • writing a paper: the paper is well written and the style looks correct (I’m not an expert on ICSE paper style). Every field converges on a common style for writing papers, and there are substyles for major conferences. Getting the style correct is an important component of getting a paper accepted at a particular conference. I suspect that the paper’s other two authors played a major role in getting the style correct; or, perhaps there is now a language model tuned to writing papers for the major software conferences.
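
For readers who have not met random effects models, the sketch below shows what such an analysis can look like in Python with statsmodels. The data, column names, and effect sizes are invented for illustration; they are not from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Invented data: 20 subjects, each completing a task under three
# interruption conditions; a per-subject random intercept captures
# the fact that some people are simply faster than others.
subjects = np.repeat(np.arange(20), 3)
condition = np.tile(["none", "onscreen", "inperson"], 20)
base = {"none": 180.0, "onscreen": 200.0, "inperson": 215.0}
seconds = (np.array([base[c] for c in condition])
           + rng.normal(0, 15, 20)[subjects]   # per-subject effect
           + rng.normal(0, 10, 60))            # measurement noise

df = pd.DataFrame({"subject": subjects,
                   "condition": condition,
                   "seconds": seconds})

# Mixed (random effects) model: fixed effect of condition, random
# intercept per subject, rather than a simplistic pooled comparison.
model = smf.mixedlm("seconds ~ condition", df, groups=df["subject"])
print(model.fit().summary())
```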

Why was this paper accepted at ICSE?

The paper is well written, covers a subject of general interest, involves an experiment, and discusses the results numerically (and very positively, which every other paper does, irrespective of their values).

The paper leaves out many of the details needed to understand what is going on. Those who volunteer their time to review papers submitted to a conference are flooded with a lot of work that has to be completed relatively quickly, i.e., before the published paper acceptance date. Anybody who has not run experiments (probably a large percentage of reviewers), and who doesn’t know how to analyse data using non-simplistic techniques (probably most reviewers), is not going to be able to get a handle on the (unsurprising) results in this paper.

The authors got lucky by not being assigned reviewers who noticed that it’s to be expected that more time will be needed for a 3-minute task when the subject experiences an on-screen interruption, and even more time for an in-person interruption, or that the p-values in the last column of Table 3 (0.0053, 0.3522, 0.6747) highlight the meaninglessness of the ‘interesting’ numbers listed.

In a year or two, Yimeng Ma will be embarrassed by the mistakes in this paper. Everybody makes mistakes when they are starting out, but few get to make them in a paper that wins an award at a major conference. Let’s forget this paper.

Those interested in task interruption might like to read (unfortunately, only a tiny fraction of the data is publicly available): Task Interruption in Software Development Projects: What Makes some Interruptions More Disruptive than Others?

new shelton wet/dry: the nature of luck

A ten-year scientific study into the nature of luck has revealed that, to a large extent, people make their own good and bad fortune. The results also show that it is possible to enhance the amount of luck that people encounter in their lives. [PDF]

The man who bought Pine Bluff, Arkansas

Life expectancy for the U.S. population in 2022 was 77.5 years, an increase of 1.1 years from 2021. The infant mortality rate was 560.4 infant deaths per 100,000 live births in 2022, an increase of 3.1% from the rate in 2021 (543.6).

In June 2023, a SpaceX rocket deployed a first-of-its-kind spacecraft designed to autonomously synthesize a drug — the HIV-AIDS medication ritonavir — while in Earth’s orbit.

What Happens to Google Maps When Tectonic Plates Move? (Earth’s tremors can tweak your GPS coordinates)

A Surprising Advantage of Vinyl

The trendy second-hand clothing market is huge and still growing – yet nobody is turning a profit

Peter Thiel, Jeff Bezos and Mark Zuckerberg are leading a parade of corporate insiders who have sold hundreds of millions of dollars of their companies’ shares this quarter, in a signal that recent stock market exuberance could be peaking. [FT | ungated]

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Schedule



Click here to go see the bonus panel!

Hovertext:
The real mystery is whether the last panel is right before or right after.


Today's News:

Disquiet: Kelly Moran’s Ice Breaker

This first appeared in the March 21, 2024, issue of the This Week in Sound email newsletter, also the newsletter’s 22nd Listening Post.

Just over a year into the pandemic, Kelly Moran marked most electronic music fans’ favorite annual holiday, April 14, in honor of the Aphex Twin song “Avril 14th,” with the requisite solo piano cover. She recorded her video with a camera that she set to look directly down on her keyboard, and at first all we see is the piano — even after the music starts playing. Magically, the keys move without anyone touching them, and then her hands — slender, sensual, nails gleaming colorfully — appear alongside the ghost accompaniment and flesh out her own version of the song. 

It turns out that she was performing on a Disklavier, on loan from Yamaha, the same instrument on which Aphex Twin reportedly recorded the original version. “Avril 14th” appeared on his 2001 album, Drukqs; Moran’s cover marked the 20th anniversary. 

More time has passed. In the years since that simple (if deceptively so) Aphex Twin experiment of hers, Moran has come to wield the Disklavier not just expertly but ferociously. She has pushed its feature set further. The instrument allows her to record parts and play along with them, and record that and play along with that. Her deep pandemic studies have yielded impossible, post-human music that is truly hyperactive, with chords that no human could accomplish on their lonesome in cadences no human could play for a prolonged period. The works are crystalline paradoxes at warp speed. It’s absolutely perfect that “Butterfly Phase,” the lead video for her forthcoming record, Moves in the Field (due out March 29), involves figure skating, because aesthetically that’s what Moran’s current music is: calisthenic, showy, muscular, and deeply competitive. (Regarding that last point, the title comes from the term in skating for the tests of a competitor’s abilities.) 

Both “Butterfly Phase” and another track, “Sodalis (II),” are available as previews in advance of the full album’s release:

https://kellymoran.bandcamp.com/album/moves-in-the-field

new shelton wet/dry: Carrying a baby


Man arrested after allegedly taking leg of pedestrian after train incident in Wasco

Women experience disruptions in their sleep patterns in the days leading up to and during their period (peri-menstrual phase), spending more time awake at night, with a lower proportion of time spent in bed that is asleep (lower sleep efficiency). During the peri-menstrual phase, women report heightened feelings of anger compared to other phases of their menstrual cycle. Sleep disturbances during the peri-menstrual phase correlate with reduced positive emotions such as calmness, happiness, and enthusiasm.

Pregnancy advances your ‘biological’ age — but giving birth turns it back — Carrying a baby creates some of the same epigenetic patterns on DNA seen in older people

Scientists Reveal a Healthier Way to Cook Broccoli — pulverized the broccoli, chopping it into 2-millimeter pieces to get as much myrosinase activity going as possible (remember, the activity happens when broccoli is damaged). […] then left alone for 90 minutes before being stir-fried for four minutes […] they didn’t test it but thought “30 minutes would also be helpful”

The bizarre world of people who see ‘demonic’ faces

Facial Recognition Technology and Human Raters Can Predict Political Orientation From Images of Expressionless Faces

How Spammers, Scammers and Creators Leverage AI-Generated Images on Facebook for Audience Growth [PDF]

In two court orders, the federal government told Google to turn over information on anyone who viewed multiple YouTube videos and livestreams. Privacy experts say the orders are unconstitutional.

How to Run a CIA Base in Afghanistan — Targeting officers are the officers at CIA who basically write the book on a specific target. They are analyzing all sorts of information coming in, whether it’s signals intelligence (SIGINT), HUMINT, open source, and they’re creating a profile of an individual or perhaps a terrorist group that CIA wants to go out and recruit a source from or within, and really helps the case officer think about how they approach an individual and perhaps where to find that individual overseas.

Why Is It So Hard to Build an Airport?

Global prediction of extreme floods in ungauged watersheds — Using AI and open datasets, we are able to significantly improve the expected precision, recall and lead time of short-term (0–7 days) forecasts of extreme riverine events.

Saturday Morning Breakfast Cereal: Saturday Morning Breakfast Cereal - Caveman



Click here to go see the bonus panel!

Hovertext:
This caption works for most comics, and makes them a lot nicer.


Today's News:

I'll be at CulpeperCon today, if you're in central Virginia and want to say hi.

MattCha's Blog: 2004 Dragon Tea House Yiwu Gushu vs 2006





I was really excited to try this 2004 Dragon Tea House Yiwu ($520.00 for 400g cake or $1.30/g) from TeasWeLike.  I love the 2006 Dragon Tea House Yiwu but for some reason it doesn’t agree with me.  It turns out this 2004 and me get along quite nicely…

The very compressed dry leaves smell of a dry peat sweet odour.

The rinsed leaf is a sweeter peaty odour.

First infusion has a creamy talc sweetness with faint woody background.  Nice chalk finish, pure clean tastes.  Long finish with slight sandy sticky mouth and clear sweet creamy talc and almost fruity taste.

Second infusion has a woody creamy sweet taste.  Full, almost dry chalky mouthfeel with pops of cocoa creamy, almost fruity sweetness.

Third is left to cool and gives off pops of sweet fruit and creamy talc.  There is a peat dry woody faint base and a long creamy sweet returning that dissolves into fruity on the breath.  The mouthfeel is a chalky not dry but almost gripping undertone.  Long deep throat and sweet taste.  Sweep of happy peace feeling with notable light limbs feeling.  Heart slow beats with mind and chest open.

Fourth infusion has a woody sweet creamy taste there is a creamy sweet return that transitions to fruity sweetness.  Chalky almost gripping mouthfeel leaves a long sweet taste in the mouth.  Peaceful focus. Light limbs and body.



5th has a sweet fruity clear onset strong euphoria happy feel good Qi.  Strong floating feeling with face flushing.  

6th has a woody slight incense smoke sweet taste. Creamy sweet return that turns into fruits.  Strong floating limbs and euphoria peace with strong slowing heart beats.  

Next day I’m flash steeping the 7th infusion; it is mineral sweet with a creamy talc finish in the mouth.  Almost turns fruity sweet minutes later.  Clear tastes here.  Strong happy focusing Qi.

8th is a clogged pot…. Welcome back to teapot sessions!!!! It gives off a dense incense resin woody bitter with a sweet fruity taste.  Strong woody incense resin wood with strong sweet taste underneath.  Dense taste in this accidental minutes long steeping.

9th is a flash steeping and comes out with a very fresh fruity sweetness.  Long fruity clear sweetness.  Light body, strong happy feel good Qi.  Silty mouthfeel with deep throat sweetness.  Lots of fresh clear fruit taste.

10th is left to cool and tastes like sweet wood with a silty smooth mouthfeel.  Nice peace out Qi feeling.

11th has lots of clear pure sweet fruity taste but also some sour taste developing underneath.

12th is cool but has a woody incense sweetness with a returning sweet bread finish.  There is some mineral taste and some cocoa taste as well.  Silty mouthfeeling with mouthwatering and deep throat.

13th … 14th … 15th still have good stamina, giving off peace vibes and sweet tastes.  Woody resin, faint incense, faint cocoa.  Mild silty mouthfeel.

16th is a long mug steeping, which is quite woody incense and bitter with a sweet taste underneath.  Full chest feeling.

This has very clear long sweet tastes and a nice strong peace out Qi feeling with slow heart beats and a light limbs body feeling.  The mouthfeel is layered: a silty mouthfeel that is sometimes almost gripping and stimulating but also mouthwatering- a nice complement to the long clear fresh fruity tastes.  This puerh feels like aged tea but still maintains the fresh fruity nuances of its youth.  The description on the site says it’s smokier than the 2006 but I disagree- at least my cake has some infusions with almost no smoke and other times it’s that aged out blended in background incense smoke.

Vs 2006 Dragon Tea House Yiwu Gushu

First infusion has a creamy smoke sweetness. The creaminess is more pronounced and sweet and the smoke is stronger and less background incense than 2004.  

Second has a lubricating mouthwatering sweetness with upfront smoke balance.  Thick lubricating mouthwatering oily feel in the mouth.  Very long cool creamy sweetness.  Strong chest thumping and surging energy is mixed with euphoric calm.  Face numb and mind floating; it has a much stronger excitatory effect on body and mind than the 2004.

Third infusion is left to cool and gives off a smoky mesquite BBQ, long creamy sweetness and slight mild lip tightness with an oily, almost pasty texture.  Long creamy, almost bready sweetness.

Fourth is a creamy sweet balanced with smoke and a long creamy sweetness.  Strong chest beats and energetic euphoria.  Strong in the mind; the sweet taste is long.

Fifth infusion has a creamy sweetness with a faint smoke background and a nice silty chalky mouthfeeling.  Mind expanding and big chest beats.  Of note, the liquor of this 2006 is quite a bit darker than the much tighter compressed but two years older 2004.

Sixth infusion is left to cool and is very creamy sweet with chalky mouthfeel long sweet creamy taste. Less smoke background.  

Seventh is again consistently sweet and creamy with a chalky mouthfeeling; the smoke becomes less and the sweetness holds.

Eighth has a woody sweet long creamy sweetness that kind of just snowballs into the returning sweetness; it’s nice and long and mouthwatering, with a more relaxing spacy Qi left in here.

Ninth … 10th … 11th are soooo yummy sweet; the background smoke is less and less, but a bit of acidity starts developing.



Comparison: the 2006 has spent two years in my drier cooler storage but feels much younger than the more aged feeling 2004.  The 2006 has more power and intensity to it overall- it’s more acidic, it gives off an energetic effect on the body and mind and is a bit harsh on the body at times, its sweetness is stronger and its smoke is more out front.  The 2004 exerts a more mellow peaceful feeling with strong but slow heart beats and airy light limbs.  Its sweet taste is very clear, pure, fresh, and long.  Its mouthfeel is more complex.  You can see the difference in composition of the wet leaf.  The 2006 pours a much darker colour where the 2004 is still yellowish.  For me the 2004 is the superior puerh.

Just one more note on these Dragon Tea House Yiwus that I don’t think is mentioned enough.  The storage is really quite brilliantly dry.  It really captures the light nuances of a Yiwu puerh and makes it feel almost younger than it is.  This was apparent to me in this blind testing of the 2015 Dragon Tea House Yiwu.

Not sure if I will pick up another with this price tag … but I do quite like the experience…

Marco’s (Late Steeps) Tasting Notes

Peace


Daniel Lemire's blog: Passing recursive C++ lambdas as function pointers

In modern C++, as in many popular languages, you can create ‘lambdas’. Effectively, they are potentially anonymous function instances that you can create on the fly as you are programming, possibly inside another function. The following is a simple example.

auto return1 = [](int n) -> int { return 1; };

What about recursive functions? At first I thought you could do…

auto fact = [](int n) -> int {
  if (n == 0) {
    return 1;
  } else {
    return n * fact(n - 1);
  }
};

Sadly, it fails. What seems to be happening is that while the compiler recognizes the variable ‘fact’ within the definition of ‘fact’, it cannot use it without knowing its type. So you should specify the type of ‘fact’ right away. The following will work:

std::function<int(int)> fact = [](int n) -> int {
  if (n == 0) {
    return 1;
  } else {
    return n * fact(n - 1);
  }
};

But using std::function templates may add complexity. For example, what if you have a function that takes a function as a parameter without using std::function, such as…

void print(int (*f)(int)) {
  for (int k = 1; k < 10; ++k) {
    std::cout << "Factorial of " << k << " is " << f(k) << std::endl;
  }
}

Then you would want to call print(fact), but it will not work directly. It may complain like so:

No known conversion from 'std::function' to 'int (*)(int)'

So let us avoid std::function as much as possible:

int (*factorial)(int) = [](int n) -> int {
  if (n == 0) {
    return 1;
  } else {
    return n * factorial(n - 1);
  }
};

And then everything works out fine:

    print(factorial); // OK

Let me finish with a word of caution: functional programming is sophisticated, but it has downsides. One potential downside is performance. Let us consider this conventional code:

int factorialc(int n) {
  if (n == 0) {
    return 1;
  } else {
    return n * factorialc(n - 1);
  }
}
int functionc() {
  return factorialc(10);
}

Most compilers should produce highly optimized code in such a scenario. In fact, it is likely that the returned value of ‘functionc’ gets computed at compile time. The alternative using lambdas might look as follows:

int (*lfactorial)(int) = [](int n) -> int {
  if (n == 0) {
    return 1;
  } else {
    return n * lfactorial(n - 1);
  }
};

int functionl() {
  return lfactorial(10);
}

Though the results will depend on your system, I would expect far less efficient code in general.

Thus, when programming in C++, if you use lambdas in performance-critical code, run benchmarks or disassemble your function to make sure that you have, indeed, a zero-cost abstraction.
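If you would rather avoid both std::function and a global function pointer, one well-known alternative (covered in the further reading below) is to pass the lambda to itself as a generic parameter. A minimal sketch, not taken from the original post:

#include <iostream>

int main() {
  // A C++14 generic lambda that receives itself as its first argument,
  // so it needs neither global state nor type erasure.
  auto fact = [](auto self, int n) -> int {
    return n == 0 ? 1 : n * self(self, n - 1);
  };
  std::cout << fact(fact, 10) << std::endl; // prints 3628800
}

Because there is no function-pointer indirection here, compilers can typically inline such code as well as they do the conventional version.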

My source code is available.

Credit: Thanks to Ca Yi, Yagiz Nizipli and many X users for informing this post.

Further reading: Recursive lambdas from C++14 to C++23

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: “You Are Doing Great” by Artist Jesse Zuo

Jesse Zuo

Jesse Zuo on Instagram

Jesse Moynihan: Forming 378

Planet Lisp: Joe Marshall: Porting a Game from Java (update)

I didn’t expect anyone would be interested, so I just pushed the code that I had with little thought about anyone trying to use it. It turns out that some people actually wanted to run it, so I polished off some of the rough edges and made it easier to get working. Feel free to email me if you have questions or suggestions.

Schneier on Security: On Secure Voting Systems

Andrew Appel shepherded a public comment—signed by twenty election cybersecurity experts, including myself—on best practices for ballot marking devices and vote tabulation. It was written for the Pennsylvania legislature, but it’s general in nature.

From the executive summary:

We believe that no system is perfect, with each having trade-offs. Hand-marked and hand-counted ballots remove the uncertainty introduced by use of electronic machinery and the ability of bad actors to exploit electronic vulnerabilities to remotely alter the results. However, some portion of voters mistakenly mark paper ballots in a manner that will not be counted in the way the voter intended, or which even voids the ballot. Hand-counts delay timely reporting of results, and introduce the possibility for human error, bias, or misinterpretation.

Technology introduces the means of efficient tabulation, but also introduces a manifold increase in complexity and sophistication of the process. This places the understanding of the process beyond the average person’s understanding, which can foster distrust. It also opens the door to human or machine error, as well as exploitation by sophisticated and malicious actors.

Rather than assert that each component of the process can be made perfectly secure on its own, we believe the goal of each component of the elections process is to validate every other component.

Consequently, we believe that the hallmarks of a reliable and optimal election process are hand-marked paper ballots, which are optically scanned, separately and securely stored, and rigorously audited after the election but before certification. We recommend state legislators adopt policies consistent with these guiding principles, which are further developed below.

Schneier on Security: Licensing AI Engineers

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?

I have mixed feelings about the idea. I can see the appeal, but it never seemed feasible. I’m not sure it’s feasible today.

Schneier on Security: Google Pays $10M in Bug Bounties in 2023

BleepingComputer has the details. It’s $2M less than in 2022, but it’s still a lot.

The highest reward for a vulnerability report in 2023 was $113,337, while the total tally since the program’s launch in 2010 has reached $59 million.

For Android, the world’s most popular and widely used mobile operating system, the program awarded over $3.4 million.

Google also increased the maximum reward amount for critical vulnerabilities concerning Android to $15,000, driving increased community reports.

During security conferences like ESCAL8 and hardwea.io, Google awarded $70,000 for 20 critical discoveries in Wear OS and Android Automotive OS and another $116,000 for 50 reports concerning issues in Nest, Fitbit, and Wearables.

Google’s other big software project, the Chrome browser, was the subject of 359 security bug reports that paid out a total of $2.1 million.

Slashdot thread.

Planet Lisp: Eugene Zaikonnikov: EURISKO lives

When I wrote about EURISKO a few years ago, there was hardly an expectation of a follow-up. The system was a dusty legend, with some cynical minds arguing over whether it existed in the first place.

However, Lenat's death in August last year has unlocked his SAILDART archives account. This has led to a thrilling discovery of both AM and EURISKO sources by WhiteFlame. In a further development, seveno4 has managed to adapt EURISKO to run on Medley Interlisp.

While I had marveled at the idea of discovery systems before, I hadn't considered ever running EURISKO myself as a possibility. Truly an Indiana Jones finding the Lost Ark moment. Yet this very low probability event has indeed happened, as documented in the video below. Rewind to 8:20 for the Medley run.

new shelton wet/dry: the truth

In the first case, a sex doll was mistaken for a corpse; in the second case, a corpse was mistaken for a doll […] the increasingly doll-like appearance of some people, e.g., through cosmetic surgery, will lead to a rise in such cases.

Memories from when you were a baby might not be gone — The mystery of “infantile amnesia” suggests memory works differently in the developing brain

Comedians reported significant levels of symptomatology for Generalized Anxiety Disorder (GAD) and Somatization Disorder, and they screened positive for alcohol and substance use problems at higher rates.

Much like biological species, languages spread, evolve, compete and even go extinct. To understand these mechanisms, physicists are applying their methods to linguistics, creating the interdisciplinary field of language dynamics

Only seven countries meet WHO air quality standard, research finds — Australia, Estonia, Finland, Grenada, Iceland, Mauritius and New Zealand. Puerto Rico, Bermuda and French Polynesia also fell within safe levels.

Michel Talagrand Wins Abel Prize for Work Wrangling Randomness

Can a classical computer tell if a quantum computer is telling the truth? Researchers in Austria say the answer is yes.

One of Mexico’s most powerful criminal groups runs call centers that offer to buy retirees’ vacation properties and then empty their bank accounts. Cartel employees posing as sales representatives call up timeshare owners, offering to buy their investments back for generous sums. They then demand upfront fees for anything from listing advertisements to paying government fines. [NY Times]

Mozart (2X Speed) & The Bible (Chinese) spinning around you

Planet Haskell: Tweag I/O: Evaluating retrieval in RAGs: a practical framework

Evaluation of Retrieval-Augmented Generation (RAG) systems is paramount for any industry-quality usage. Without proper evaluation we end up in the world of “it works on my machine”. In the realm of AI, this would be called “it works on my questions”.

Whether you are an engineer seeking to refine your RAG systems, are just intrigued by the nuances of RAG evaluation or are eager to read more after the first part of the series (Evaluating retrieval in RAGs: a gentle introduction) — you are in the right place.

This article equips you with the knowledge needed to navigate evaluation in RAGs and the framework to systematically compare and contrast existing evaluation libraries. This framework covers benchmark creation, evaluation metrics, parameter space and experiment tracking.

An experimental framework to evaluate RAG’s retrieval

Inspired by reliability engineering, we treat RAG as a system whose different parts may fail. When the retrieval part is not working well, there is no useful context to give to the LLM component, and thus no meaningful response: garbage in, garbage out.

Improving retrieval performance may be approached like a classic machine learning optimization: search the space of available parameters and select the ones that best fit the evaluation criteria. This approach can be classified under the umbrella of Evaluation Driven Development (EDD) and requires:

  1. Creating a benchmark
  2. Defining the parameter space
  3. Defining evaluation metrics
  4. Tracking experiments and results
Figure 1: Evaluation golden quartet.

Figure 2, below, provides a detailed view of the development loop governing the evaluation process:

  • The part on the left depicts user input (benchmarks and parameters).
  • The retrieval process on the right includes requests to the vector database, but also changes to the database itself: a new embedding model means a new representation of the documents in the vector database.
  • The final step involves evaluating retrieved documents using a set of evaluation metrics.

This loop is repeated until the evaluation metrics meet an acceptance criteria.

Figure 2: Experiment, expand parameter space, repeat.

In the following sections we will cover the different components of the evaluation framework in more detail.

Building a benchmark

Building a benchmark is the first step towards a repeatable experimental framework. While it should contain at least a list of questions, the exact form of the benchmark depends on which evaluation metrics will be used, and it may consist of a list of any of the following:

  • Questions
  • Pairs of (question, answer)
  • Pairs of (question, relevant_documents)

Building a representative benchmark

Like collecting requirements for a product, we need to understand how chatbot users are going to use the RAG system and what kinds of questions they are going to ask. Therefore, it’s important to involve someone familiar with the knowledge base to assist in compiling the questions and identifying the necessary resources. The collected questions should represent the user’s experience; a statistician would say that a benchmark should be a representative sample of questions. This allows us to measure the quality of the retrieval correctly. For example, if you have an internal company handbook, you will most likely ask questions about the company goals or how some internal processes work, and probably not about the dietary requirements of a cat (see Figure 3).

Figure 3: The probability density function of questions that users might ask of corporate documentation.

Benchmark generation

Benchmark datasets can be collected through the following methods:

  • Human-created: A human creates a list of questions based on their knowledge of the document base.
  • LLM-generated: Questions (and sometimes answers) are generated by an LLM using documents from the database.
  • Combined human and LLM: Human-provided benchmark questions are augmented with questions reformulated by LLMs.

The hard part in collecting a benchmark dataset is obtaining representative and varied questions. Human-generated benchmarks will contain the questions typically asked of the tool, but the volume of questions will be low. On the other hand, machine-generated benchmarks may be larger in scale but may not accurately reflect real user behavior.

Manually-created benchmarks

In the experiments we ran at Tweag, we used a definition of a benchmark where you not only have questions but also the expected output. This makes the benchmark a labeled dataset (more details on that in an upcoming blog post). Note here that we do not give direct answers to the benchmark questions; we instead provide relevant documents, for example a list of web page URLs containing the relevant information for certain questions. This formulation allows us to use classical ML measures like precision and recall. This is not the case for other benchmark creation options, which need to be evaluated with LLM-based evaluation metrics (discussed further in the corresponding section).

Here’s an example of the (question, relevant_documents) option:

("What is a BUILD file?", ["https://bazel.build/foo", "https://bazel.build/bar"])

Automating benchmark creation

It is possible to automate the creation of questions: MLflow, LlamaIndex and Ragas all allow you to use LLMs to create questions from your document base. Questions created by humans, whether written specifically for the benchmark or obtained from users, result in smaller benchmarks; automation allows for scaling to much larger ones. LLM-generated questions lack the complexity of human questions, however, and are typically based on a single document. Moreover, they do not represent the typical usage over the document base (after all, not all questions are created equal), and classical ML measures are not directly applicable.

Reformulating questions with LLMs

Another way to artificially augment a benchmark is to reformulate existing questions with LLMs. While this does not increase coverage over the documents, it allows for a wider evaluation of the system. Note that if the benchmark associates answers or relevant documents with each question, these should remain the same for the reformulated questions.

A RAG-specific data model

What is the parameter search space for the best-performing retrieval?

A subset of the search space parameters is connected with the way documents are represented in the vector database¹:

  • The embedding model and its parameters.
  • The chunking method, for example RecursiveCharacterTextSplitter and the parameters of this chunking model, like chunk_size.

Another subset of the search space parameters is connected to how we search the database and preprocess the data:

  • The top_k parameter, representing top k matching results.

  • The preprocessing_model, a function that takes a query sent by the RAG user and cleans it up before performing search on the vector database. The preprocessing function is useful for queries like:

    Please give me a table of Modus departments ordered by the number of employees.

    Where it is better for the query sent to the vector database to contain:

    Modus departments with number of employees

    as the “table” part of the query is about formatting the resulting output and not the semantic search.

The example below shows a JSON representation of the retrieval configuration:

"retrieval": {
       "collection_name": "default",
       "embedding_model": {
           "name": "langchain.embeddings.SentenceTransformerEmbeddings",
           "parameters": { "model_name": "all-mpnet-base-v2" }
       },
       "chunking_model": {
           "name": "langchain.text_splitter.RecursiveCharacterTextSplitter",
           "parameters": { "chunk_size": 500, "chunk_overlap": 5 }
       },
       "top_k": 10,
       "preprocessing_model": {
           "name": ""
       }
   },
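As an illustration of how such a configuration could be consumed, here is a small hypothetical Python sketch (ours, not from the post) that resolves the dotted class names with importlib and constructs the configured objects:

import importlib

def instantiate(spec):
    # Resolve the dotted "name" of a config entry and construct the class
    # with its "parameters".
    module_name, _, class_name = spec["name"].rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(**spec.get("parameters", {}))

# For example, given the "retrieval" configuration above:
# chunker = instantiate(config["retrieval"]["chunking_model"])
# embedder = instantiate(config["retrieval"]["embedding_model"])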

Evaluation metrics

The first set of evaluation metrics we would like to present has roots in the well-established field of Information Retrieval. Given a set of documents retrieved from the vector database and a ground truth of documents that should have been retrieved, we can compute information retrieval measures, including but not limited to:

  • Precision: the proportion of retrieved documents that belong to the ground truth for the question.
  • Recall: the proportion of ground-truth documents that were actually retrieved.

For more details and a discussion of other RAG-specific evaluation metrics, including those computed with the help of LLMs, have a look at our first blog post in the RAG series.

The measure you choose should best fit your evaluation objective. For example, it may be the mean value of recall computed over the questions in the ground truth dataset.
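To make this concrete, here is a minimal Python sketch (the function names and data layout are our own) of computing mean recall over a benchmark of (question, relevant_documents) pairs:

def recall(retrieved, relevant):
    # Fraction of the ground-truth documents found among the retrieved ones.
    if not relevant:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(relevant)

def mean_recall(benchmark, retrieve):
    # "retrieve" is any callable mapping a question to a list of document URLs.
    return sum(recall(retrieve(q), docs) for q, docs in benchmark) / len(benchmark)

benchmark = [
    ("What is a BUILD file?",
     ["https://bazel.build/foo", "https://bazel.build/bar"]),
]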

Experiment tracking

What information about the experiment do we need to track to make it reproducible? Retrieval parameters, for sure! But this is not enough. The choice of the vector database, the benchmark data, the version of the code you use to run your experiment, among others, all have a say in the results.

If you’ve done some MLOps before, you can see that this is not a new problem. And fortunately, frameworks for data and machine learning like MLflow and DVC as well as version controlled code make tracking and reproducing experiments possible.

MLflow allows for experiment tracking, including logging parameters, saving results as artifacts, and logging computed metrics, all of which is useful for comparing different runs and models.

DVC (Data Version Control) can be used to keep track of the input data, model parameters, and databases. Combined with git, it allows for “time travelling” to a different version of the experiment.

We also used ChromaDB as a vector database. The “collections” feature is particularly useful to manage different vector representations (chunking and embedding) of text data in the same database.
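For example, a minimal sketch of how collections might hold two chunking variants of the same corpus side by side (collection names and documents are invented):

import chromadb

client = chromadb.Client()  # in-memory client; persistent clients also exist

# One collection per (chunking, embedding) configuration under evaluation.
chunk200 = client.create_collection("handbook_chunk200")
chunk500 = client.create_collection("handbook_chunk500")

chunk200.add(
    ids=["doc1-0", "doc1-1"],
    documents=["first 200-character chunk...", "second 200-character chunk..."],
)

# top_k retrieval against one configuration:
results = chunk200.query(query_texts=["company goals"], n_results=2)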

Note that it is also good practice to save the retrieved references (for example, in a JSON file) to make inspection and sharing easy.

Limitations

Similar to training a classical ML model, the evaluation framework outlined in this post carries the risk of overfitting, where you adjust your model’s parameters based solely on the training set. An intuitive solution is to divide the dataset into training and testing subsets. However, this isn’t always feasible. Human-generated datasets tend to be small, as human resources do not scale efficiently. This problem might be alleviated by using LLM-assisted generation of a benchmark.

Summary

In this blog post, we proposed an alternative to the problematic “eye-balling” approach to RAG evaluation: a systematic and quantitative retrieval evaluation framework.

We demonstrated how to construct it, beginning with the crucial step of building a benchmark dataset representative of real-world user queries. We also introduced a RAG-specific data model and evaluation metrics to define and measure different states of the RAG system.

This evaluation framework integrates the broader concepts from methodologies and best practices of Machine Learning, Software Development and Information Retrieval.

Leveraging this experimental framework with appropriate tools allows practitioners to enhance the reliability and effectiveness of RAGs, an essential prerequisite for production-ready use.

Thanks to Simeon Carstens and Alois Cochard for their reviews of this article.


  1. An implicit assumption here is that semantic search and a vector database are in use, but the data model may be generalized to use keyword search as well.

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: Artist Spotlight: Emanuela Lekić

Emanuela Lekić

Emanuela Lekić on Instagram

Planet Haskell: Well-Typed.Com: The Haskell Unfolder Episode 22: foldr-build fusion

Today, 2024-03-20, at 1930 UTC (12:30 pm PDT, 3:30 pm EDT, 7:30 pm GMT, 20:30 CET, …) we are streaming the 22nd episode of the Haskell Unfolder live on YouTube.

The Haskell Unfolder Episode 22: foldr-build fusion

When composing several list-processing functions, GHC employs an optimisation called foldr-build fusion. Fusion combines functions in such a way that any intermediate lists can often be eliminated completely. In this episode, we will look at how this optimisation works, and at how it is implemented in GHC: not as built-in compiler magic, but rather via user-definable rewrite rules.
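For a taste of what those rewrite rules look like, here is a simplified sketch of the definitions involved; the real ones, with the same shape, live in GHC.Base:

{-# LANGUAGE RankNTypes #-}

-- 'build' makes a list out of a function abstracted over the
-- list constructors (:) and [].
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- The user-definable rewrite rule: consuming a freshly built list
-- with foldr never needs to materialize the intermediate list.
{-# RULES
"foldr/build" forall k z (g :: forall b. (a -> b -> b) -> b -> b).
              foldr k z (build g) = g k z
  #-}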

About the Haskell Unfolder

The Haskell Unfolder is a YouTube series about all things Haskell hosted by Edsko de Vries and Andres Löh, with episodes appearing approximately every two weeks. All episodes are live-streamed, and we try to respond to audience questions. All episodes are also available as recordings afterwards.

We have a GitHub repository with code samples from the episodes.

And we have a public Google calendar (also available as ICal) listing the planned schedule.

Planet Lisp: Joe Marshall: Porting a Game from Java

I decided to learn about games, so I followed along with a tutorial by Kaarin Gaming. His tutorial was written in Java, but of course I used Common Lisp. I made no attempt to faithfully replicate his design, but I followed it closely in some places, less so in others. The resulting program was more or less a port of the Java program to Common Lisp, so it is not very remarkable in and of itself. Certainly I don’t expect many people to be interested in reading beyond this point.

It’s known that Java is a wordy language, and it shows in the end result. The tutorial had 3712 lines of Java code in 39 files; the equivalent Common Lisp was 2255 lines in 21 files. A typical Common Lisp file would contain more code than a typical Java file, and it was often the case that a Common Lisp file would contain multiple classes.

Both versions used separate render and game mechanics threads. The render thread ran at about 60 frames per second where the game mechanics ran at 200 steps per second. The threads were mostly independent, but the game mechanics would occasionally have to query the render thread to find out whether an animation had completed or what frame the animation was on in order to synchronize attacks with reactions.

There were a couple of notable differences in the two implementations. The Java implementation would advance animation frames imperatively by incrementing the animation frame counter every few rendering cycles. The Common Lisp implementation would instead compute the animation frame functionally, by subtracting the animation start time from the current time and dividing by the ticks per animation frame. In Common Lisp, different animation effects could be achieved by changing how the animation frame was computed. If you computed the frame number modulo the number of frames in the animation, you’d get an animation loop. If you clamped the frame number, you’d get a one-shot animation.

CLOS made some things easier. An :after method on the (setf get-state) of an entity would set the animation. The get-y method on some objects would (call-next-method) to get the actual y position and then add a time-varying offset to make the object “float” in mid air. The get-x method on projectiles would (call-next-method) to get the starting x position and then add a factor of the current ticks. This causes projectiles to travel uniformly horizontally across the screen. I often used the ability of CLOS to specialize on some argument other than the first one just to make the methods more readable.

The Common Lisp code is at http://github.com/jrm-code-project/Platformer, while the Java code is at https://github.com/KaarinGaming/PlatformerTutorial. Being a simple port of the Java code, the Common Lisp code is not exemplary, and since I didn’t know what I was doing, it is kludgy in places. The Common Lisp code would be improved by a rewrite, but I’m not going to. I was unable to find a sound library for Common Lisp that could play .wav files without a noticeable delay, so adding sound effects to the game is sort of a non-starter. I think I’ve gotten what I can out of this exercise, so I’ll likely abandon it now.

CreativeApplications.Net: What is for Sure – Studio Verena Bachl + Karsten Schuhl

The light installation 'What is for Sure' explores the relationship between space and time. By reinterpreting and manipulating the seemingly natural and chronological rhythm of light, the artwork addresses the phenomenon of chromatic changes in the sky caused by atmospheric scattering.

Submitted by: studio_verena_bachl
Category: Environment / MaxMSP / Member Submissions / Objects


OCaml Weekly News: OCaml Weekly News, 19 Mar 2024

  1. dune 3.14
  2. Announcing OCaml Manila Meetups
  3. Outreachy internship demo session
  4. OCaml 4.14.2 released
  5. Docfd 3.0.0: TUI multiline fuzzy document finder
  6. Shape with us the New OCaml.org Community Area!
  7. Opam-repository: Updated documentation, retirement and call for maintainers
  8. DkCoder 0.1.0
  9. A Versatile OCaml Library for Git Interaction - Seeking Community Feedback
  10. Other OCaml News

new shelton wet/dry: Big Oil & Gas

8-hour time-restricted eating, a type of intermittent fasting, linked to a 91% higher risk of cardiovascular death, n=20,000

Studies have generated strong evidence for the link between the consumption of red and processed meat and negative health outcomes – particularly the risk of developing colorectal cancer. Despite evidence for the strength of this association, researchers haven’t yet worked out why this is the case. Could Genetics Influence Cancer Risk From Red and Processed Meats?

Scientists Engineer Cow That Makes Human Insulin Proteins in Its Milk

The many flavors of edible ants

Writing by hand, not typing, linked to better learning and memory

The plastic industry knowingly pushed recycling myth for decades and Evidence shows that Big Oil & Gas knew as early as the 1960s that their products would lead to climate change

The Nuclear Fallout Maps That Revealed a Contaminated Planet

AI-enabled marketing today accounts for nearly half (45%) of all advertising globally, and by 2032 AI will influence 90% of all ad revenue, more than $1.3 trillion.

Last week, the Wall Street Journal published a 10-minute-long interview with OpenAI CTO Mira Murati, with journalist Joanna Stern […] When asked about what data was used to train Sora, OpenAI’s app for generating video with AI, Murati claimed it used publicly available data, and when Stern asked her whether it used videos from YouTube, Murati’s face contorted in a mix of confusion and pain before saying she “actually wasn’t sure about that.” […] Altman’s fanciful claims include his kids “having more AI friends than human friends,” that human-level AI is “coming” without ever specifying when, that AI will replace 95% of tasks performed by marketing agencies, that ChatGPT will evolve in “uncomfortable ways,” that AI will kill us all

Bruno Mars Reportedly In $50 Million Of Debt With MGM Casino After Assuming Cocktails Were Complimentary

BOOOOOOOM! – CREATE * INSPIRE * COMMUNITY * ART * DESIGN * MUSIC * FILM * PHOTO * PROJECTS: “I Believe in Ghosts” by Artist Jamie L. Luoto

Jamie L. Luoto

Jamie Luoto is one of 60 artists and photographers featured in our latest book, Care. See more from “I Believe in Ghosts” below.

I believe in ghosts. I believe those who weren’t and won’t be believed. I see the ghosts that haunt us.

This series of painted self-portraits gives a first-person survivor’s account to create a deeply intimate encounter between artist, subject, and viewer. I bring to light the aftermath of sexual assault by illuminating the lasting psychological impact of repeated sexual trauma on an individual. The “me too” movement exposed the breadth of sexual violence — this work reveals the depth to which these acts of violence and the accompanying dismissal, disbelief, and shaming impact individuals.

In these works I explore my psyche and experience with complex post-traumatic stress disorder (C-PTSD), blurring the line between external presentation and internal reality to bring to light the unseen injuries of sexual trauma that invade and haunt mind and body. Symptoms such as flashbacks, nightmares, dissociation, and intrusive thoughts and images are suggested using mirrors, portraits within portraits, and phallic forms, such as condoms and erect sheets, which materialize as ghosts throughout the work. At times the ghosts’ presence takes on a swarm-like quality, suggesting a looming threat. Felines act as familiars, bearing witness and embodying the subject’s emotional state.

Using the visual language of European masterworks, I manipulate the familiar to reframe classic imagery and narratives. I invoke the spectacle of visual pleasure as a means to beguile the viewer while encircling them with an undercurrent of perverse imagery. Together the collective body of work functions as an installation intended to actualize an experience akin to the inescapability of living in a body haunted by sexual trauma.

Jamie L. Luoto’s Website

Jamie L. Luoto on Instagram

Michael Geist: The Law Bytes Podcast, Episode 196: Vibert Jack on the Supreme Court’s Landmark Bykovets Internet Privacy Ruling

The federal government has struggled to update Canadian privacy laws over the past decade, leaving the Supreme Court as perhaps the leading source of privacy protection. In 2014, the court issued the Spencer decision, which affirmed a reasonable expectation of privacy in basic subscriber information, and earlier this month it released the Bykovets decision, which extends the reasonable expectation of privacy to IP addresses. Vibert Jack is the litigation director of the BC Civil Liberties Association, which successfully intervened in the case. He joins the Law Bytes podcast to examine the case, including the evolution of Canadian law, the court’s analysis, and the implications of Bykovets for Internet privacy in Canada.

The podcast can be downloaded here, accessed on YouTube, and is embedded below. Subscribe to the podcast via Apple Podcast, Google Play, Spotify or the RSS feed. Updates on the podcast on Twitter at @Lawbytespod.

Credits:

CBC News, Police Need a Warrant to Get a Person’s IP Address, Supreme Court Rules

The post The Law Bytes Podcast, Episode 196: Vibert Jack on the Supreme Court’s Landmark Bykovets Internet Privacy Ruling appeared first on Michael Geist.

Ideas: CBC Massey Lectures | #5: Escaping the Burrow

Human beings will never be totally secure, especially not on a planet that has been destabilized. In Astra Taylor's final Massey Lecture, she offers some hope and solutions. Taylor suggests cultivating an ethic of insecurity — one that embraces our existential insecurity. The experience of insecurity, she says, can offer us a path to wisdom — a wisdom that can guide not only our personal lives but also our collective endeavours.

Ideas: CBC Massey Lectures | #1: Cura’s Gift

Insecurity has become a "defining feature of our time," says CBC Massey lecturer Astra Taylor. The Winnipeg-born writer and filmmaker explores how rising inequality, declining mental health, the climate crisis, and the threat of authoritarianism originate from a social order built on insecurity. In her first lecture, she explores the existential insecurity we can’t escape — and the manufactured insecurity imposed on us from above.

Planet Haskell: Haskell Interlude: 45: András Kovács

In this episode, András Kovács is being interviewed by Andres Löh and Matthias Pall Gissurarson. We learn how to go from economics to functional programming, how GHC's runtime system is superior to Rust's, the importance of looking at GHC's Core for spotting stray closures, and why staging might be the answer to all your optimisation problems.

Jesse Moynihan: Forming 377

Planet Haskell: Michael Snoyman: How I Stay Organized

When I describe the Yesod web framework, one of the terms I use is the boundary issue. Internally, I view Yesod as an organized, structured, strongly typed ecosystem. But externally, it's dealing with all the chaos of network traffic. For example, within Yesod, we have clear typing delineations between normal strings, HTML, and raw binary data. But the network layer simply throws around bytes for all three. The boundary issue in Yesod is the idea that, before chaotic, untyped, unorganized data enters the system, it has to be cleaned, sanitized, typed, and then ingested.

This represents my overall organizational system too. I've taken a lot of inspiration from existing approaches, notably Getting Things Done and Inbox Zero. But I don't follow any such philosophy dogmatically. If your goal in reading this blog post is to get organized, I'd recommend reading this, searching for articles on organization, and then determining how you'd like to organize your life.

The process

I like to think of chaotic versus ordered systems. Chaotic systems are sources of stuff: ideas, work items, etc. There are some obvious chaotic sources:

  • Mobile app notifications

  • Incoming emails

  • Phone calls

  • Signal/WhatsApp messages

I think most of us consider these kinds of external interruptions to be chaotic. It doesn't matter what you're in the middle of, the interruption happens and you have to choose how to deal with it. (Note: that may include ignoring it, or putting notifications on silent.)

However, there's another source of chaos, arguably more important than the above: yourself. When I'm sitting working on some code and a thought comes up, it's an internally-driven interruption, and often harder to shake than something external.

Taking heavy inspiration from Getting Things Done, my process is simple for this: record the idea and move on. There are of course caveats to that. If I think of something that demands urgent attention (e.g., "oh shoot I left the food on the stove") chaos will reign. But most of the time, I'm either working on something else, taking a shower, or kicking back reading a book when one of these ideas comes up. The goal is to get the idea into one of the ordered systems so I can let go of it and get back to what I was doing.

For me, my ordered systems are basically my calendar, my todo list, and various reminders from the tools that I use. I'll get into the details of that below.

Other people

How do you treat other people in a system like this? While I think in reality there's a spectrum, we can talk about the extremes:

  • Chaotic people: these are people who don't follow your rules for organization, and will end up randomizing you. This could be a demanding boss, a petulant child, or a telemarketer trying to sell you chaos insurance (I'm sure that's a thing). In these cases, I treat the incoming messages with chaos mode: jot down all work items/ideas, or simply handle them immediately.

  • Ordered people: these are people you can rely on to participate in your system. In an ideal world, this would include your coworkers, close friends and family, etc. With these people, you can trust that "they have the ball" is equivalent to writing down the reminders in your ordered systems.

That's a bit abstract, so let's get concrete. Imagine I'm on a call with a few other developers and we're dividing up the work on the next feature we're implementing. Alice takes work item A, Bob takes work item B, etc. Alice is highly organized, so I rely on her to record the work somewhere (personal todo list, team tracker, Jira... somewhere). But suppose Bob is... less organized. I'd probably either create the Jira issue for Bob and assign it to him, or put a reminder in my own personal systems to follow up and confirm that Bob actually recorded this.

You may think that this kind of redundancy is going overboard. However, I've had to use this technique often to keep projects moving forward. I try as much as possible to encourage others to follow these kinds of organized systems. Project management is, to a large extent, trying to achieve the same goal. But it's important to be honest about other people's capabilities and not rely on them being more organized than they're capable of.

As mentioned, no one is 100% on either the order or chaos side. Even the most chaotic person will often remember to follow up on the most important actions, and even the most ordered will lose track of things from time to time.

Tooling

Once you have the basic system in mind for organizing things, you need to choose appropriate tooling to make it happen. "Tooling" here could be as simple as a paper-and-pen you carry around and write everything down. However, given how bad my handwriting is and the fact that I'm perpetually connected to an electronic device of some kind, I prefer the digital approach.

My tooling choices for organization come down to the following:

Todoist

I use Todoist as my primary todo list application. I've been very happy with it, and the ability to have shared projects has been invaluable. My wife (Miriam, aka LambdaMom) and I use a shared Todoist project for managing topics like purchases for the house, picking up medicines at the pharmacy, filing taxes, etc. And yes, having my spouse be part of the "ordered world" is a wonderful thing. We've given the advice of shared todo lists to many of our friends.

One recommendation if you have a large number of tasks scheduled each day: leverage your todo app's mechanisms for setting priorities and times of day for performing a task. When you have 30 items to cover in a day, including things like "take allergy medicine in the afternoon" and similar, it's easy to miss urgent items. In Todoist, I regularly use the priority feature to push work items to the top.

Calendars

While todo lists track work items and deliverables, calendars track specific times when actions need to be taken: show up to a meeting, go to the doctor, etc. I don't think anyone's too surprised by the idea of using a calendar to stay organized.

Email

Email is another classic organization method. Email is actually a much better ordered system than many other forms of communication, since it has:

  • Unread: things that need to be processed and organized

  • Read in inbox: things that have gone through initial processing but require more work

  • Snooze: for me a killer feature. Plenty of emails do not require immediate attention. In the past I used to create Todoist items for following up on emails that needed more work. But snoozing email is now a common feature in almost every mail system I use, and I rely on it heavily.

Other chat apps

But most communication these days is not happening in email. We have work-oriented chat (like Slack) and personal chat applications (Signal, WhatsApp, etc). My approach to these is:

  • If the app provides a "remind me later" feature, I use it to follow up on things later.

  • If the app doesn't provide such a feature, I add a reminder to Todoist.

Technically I could use "mark as unread" in many cases too. However, I prefer not doing that. You may have noticed that, with the approaches above, you'll very quickly get to 0 active notifications in your apps: no emails waiting to be processed, no messages waiting for a response. You'll have snoozed emails pop up in the future, "remind me later" messages that pop up, and an organized todo list with all the things you need to follow up on.

Notifications and interruptions

This is an area I personally struggle in. Notifications from apps are interruptions, and with the methods above I'm generally able to minimize the impact of an interruption. However, minimizing isn't eliminating: there's still a context switch. Overall, there are two main approaches you can take:

  • Receive all notifications and interruptions and always process them. This makes sure you aren't missing something important and aren't blocking others.

  • Disable notifications while you're in "deep work" and check in occasionally. This allows better work time, but may end up dropping the ball on something important.

For myself, which mode I operate in depends largely on my role. When I'm working as an individual contributor on a codebase, it's less vital to respond immediately, and I may temporarily disable notifications. When I'm leading a project, I try to stay available to answer things immediately to avoid blocking people.

My recommendation here is:

  • Establish some guidelines with the rest of your team about different signaling mechanisms to distinguish between "please answer at some point when you have a chance" and "urgent top priority please answer right now." This can be separate groups/channels with different notification settings, a rule that urgent topics require a phone call, or anything else.

  • Try to use tools that are optimized for avoiding distractions. I've been particularly enamored with Twist recently, which I think nails a sweet spot for this. I'm hoping to follow up with a blog post on team communication tools. (That's actually what originally inspired me to write this post.)

Work organization

I've focused here on personal organization, and the tools I use for that. Organizing things at work falls into similar paradigms. Instead of an individual todo list, at work we'll use project management systems. Instead of tracking messages in WhatsApp, at work it might be Teams. For the most part, the same techniques transfer over directly to the work tools.

One small recommendation: don't overthink the combining/separating of items between work and personal. I went through a period trying to keep the two completely separate, and I've gone through periods of trying to combine it all together. At this point, I simply use whatever tool seems best at the time. That could be a Jira issue, or a Todoist item, or even "remind me later" on a Slack message.

As long as the item is saved and will come up later in a reasonable timeframe, consider the item handled for now, and rely on the fact that it will pop back up (in sprint planning, your daily todo list review, or a notification from Slack) when you need to work on it.

Emotions

A bit of a word of warning for people who really get into organization. It's possible to take things too far, and relate to all impediments to your beautifully organized life as interruptions/distractions/bad things. Sometimes it's completely legitimate to respond with frustration: getting an email from your boss telling you that requirements on a project changed is difficult to deal with, regardless of your organizational system. Having a telemarketer call in the middle of dinner is always unwanted.

But taken too far, a system like this can lead you to interpreting all external interruptions as negative. And it can allow you to get overly upset by people who are disrupting your system by introducing more chaos. Try to avoid letting defense of the system become a new source of stress.

Also, remember that ultimately you are the arbiter of what you will do. Just because someone has sent you an email asking for something doesn't mean you're obligated to create a todo item and follow up. You're free to say no, or (to whatever extent it's appropriate, polite, and professional) simply ignore such requests. You control your life, not your todo program, your inbox, or anyone who knows how to ask for something.

My recommendation: try to remember that this system isn't a goal unto itself. You're trying to make your life better by organizing things. You expect that you won't hit 100%, and that others will not be following the same model. Avoiding the fixation on perfection can make all the difference.

Further reading

For now, I'm just including one "further reading" link. Overall, I really like Todoist as an app, but appreciate even more the thought they put into how the app would tie into a real organizational system. This guide is a good example:

Beyond that, I'd recommend looking up getting things done and inbox zero as search terms. And as I find other articles (or people put them in the comments), I'll consider expanding the list.

The Shape of Code: Finding reports and papers on the web

What is the best way to locate a freely downloadable copy of a report or paper on the web? The process I follow is outlined below (if you are new to this, you should first ask yourself whether reading a scientific paper will produce the result you are expecting):

  1. Google search. For the last 20 years, my experience is that Google search is the best place to look first.

    Search on the title enclosed in double-quotes; if no exact matches are returned, the title you have may be slightly incorrect (variations in the typos of citations have been used to construct researcher cut-and-paste genealogies, i.e., authors copying a citation from a paper into their own work, rather than constructing one from scratch or even reading the paper being cited). Searching without quotes may return the desired result, or lots of unrelated matches. In the unrelated matches case, quote substrings within the title or include the first author’s surname.

    The search may return a link to a ResearchGate page without a download link. There may be a “Request full-text” link. Clicking this sends a request email to one of the authors (assuming ResearchGate has an address), who will often respond with a copy of the paper.

    A search may not return any matches, or links to copies that are freely available. Move to the next stage,

  2. Google Scholar. This is a fantastic resource. This site may link to a freely downloadable copy, even though a Google search does not. It may also return a match, even though a Google search does not. Most of the time, it is not necessary to include the title in quotes.

    If the title matches a paper without displaying a link to a downloadable pdf, click on the match’s “Cited by” link (assuming it has one). The author may have published a later version that is available for download. If the citation count is high, tick the “Search within citing articles” box and try narrowing the search. For papers more than about five years old, you can try a “Custom range…” to remove more recent citations.

    No luck? Move to the next stage,

  3. If a freely downloadable copy is available on the web, why doesn’t Google link to it?

    A website may have a robots.txt requesting that the site not be indexed, or access to report/paper titles may be kept in a site database that Google does not access.

    Searches now either need to be indirect (e.g., using Google to find an author web page, which may contain the sought after file), or targeted at specific cases.

It’s now all special cases. Things to try:

  • Author’s website. Personal web pages are common for computing-related academics (much less common for non-computing, especially business oriented ones), but are often a year or two out of date. Academic websites usually show up in a Google search. For new papers (i.e., less than a year old), when you don’t need to supply a public link to the paper, email the authors asking for a copy. Most are very happy that somebody is interested in their work, and will email a copy.

    When an academic leaves a University, their website is quickly removed (MIT is one of the few that don’t do this). If you find a link to a dead site, the Wayback Machine is the first place to check (try less recent dates first). Next, the academic may have moved to a new University, so you need to find it (and hope that the move is not so new that they have not yet created a webpage),

  • Older reports and books. The Internet Archive is a great resource,
  • Journals from the 1950s/1960s, or computer manuals. bitsavers.org is the first place to look,
  • Reports and conference proceedings from before around 2000. It might be worth spending a few £/$ at a second hand book store; I use Amazon, AbeBooks, and Biblio. Despite AbeBooks being owned by Amazon, availability/pricing can vary between the two,
  • A PhD thesis? If you know the awarding university, Google search on ‘university-name “phd thesis”‘ to locate the appropriate library page. This page will probably include a search function; these search boxes sometimes support ‘odd’ syntax, and you might have to search on: surname date, keywords, etc. Some universities have digitized theses going back to before 1900, others back to 2000, and others to 2010.

    The British Library has copies of theses awarded by UK universities, and it has digitized theses going back to before 2000,

  • Accepted at a conference. A paper accepted at a conference that has not yet taken place may be available in preprint form; otherwise you are going to have to email the author (search on the author names to find their university/GitHub webpage and thence their email),
  • Both CiteSeer and, later, Semantic Scholar were once great resources. These days, CiteSeer has all but disappeared, and Semantic Scholar seems to mostly link to publisher sites and sometimes to external sites.

Dead-tree search techniques are a topic for history books.

More search suggestions welcome.

Daniel Lemire's blog: Measuring your system’s performance using software (Go edition)

When programming, we work with an abstraction of a system. The computer hardware may not know about your functions, your variables, and your data; it may only see bits and instructions. Yet to write efficient software, the programmer needs to be aware of the characteristics of the underlying system. Thankfully, we can also use the software itself to observe the behavior of the system through experiments.

Between the application code and the hardware, there are several layers, such as the compiler and the operating system. A good programmer should take these layers into account when needed, and must also understand the behavior of their software in terms of these layers.

Benchmarks in Go

To measure the performance, we often measure the time required to execute some function. Because most functions are fast, it can be difficult to precisely measure the time that a function takes if we run it just once. Instead, we can run the function many times and record the total time. We can then divide the total time by the number of executions. It can be difficult to decide how many times we should execute the function: it depends in part on how fast the function is. If a function takes 6 seconds to run, we may not want or need to run it too often. An easier strategy is to specify a minimum duration and repeatedly call the function until we reach or exceed that minimum duration.
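
Here is a minimal hand-rolled sketch of this minimum-duration strategy, before we turn to Go’s testing package; the names timePerCall and minDuration are illustrative, not part of any standard API:

package main

import (
    "fmt"
    "time"
)

// timePerCall keeps calling f until at least minDuration has elapsed,
// then reports the average time per call.
func timePerCall(f func(), minDuration time.Duration) time.Duration {
    calls := 0
    start := time.Now()
    for time.Since(start) < minDuration {
        f()
        calls++
    }
    return time.Since(start) / time.Duration(calls)
}

func main() {
    work := func() { // some function to be measured
        sum := 0
        for i := 0; i < 1000; i++ {
            sum += i
        }
        _ = sum
    }
    fmt.Println("time per call:", timePerCall(work, time.Second))
}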

When the function has a short execution time, we often call the benchmark a microbenchmark. We use microbenchmarks to compare different implementations of the same functionality or to better understand the system or the problem. We should always keep in mind that a microbenchmark alone cannot be used to justify a software optimization. Real-world performance depends on multiple factors that are difficult to represent in a microbenchmark.

Importantly, all benchmarks are affected by measurement errors, and by interference from the system. To make matters worse, the distribution of timings may not follow a normal distribution.

All programming languages provide the ability to run benchmarks. In Go, the tools make it easy to write benchmarks. You can import the testing package and create a function with the prefix Benchmark and a parameter of pointer type *testing.B. For example, the following program benchmarks the time required to compute the factorial of 10 as an integer:

package main

import (
    "fmt"
    "testing"
)

var fact int

func BenchmarkFactorial(b *testing.B) {
    for n := 0; n < b.N; n++ {
        fact = 1
        for i := 1; i <= 10; i++ {
            fact *= i
        }
    }
}

func main() {
    res := testing.Benchmark(BenchmarkFactorial)
    fmt.Println("BenchmarkFactorial", res)
}

If you put functions with such a signature (BenchmarkSomething(b *testing.B)) as part of the tests in a project, you can run them all with the command go test -bench . (the final dot is a pattern that matches every benchmark). To run just one of them, you can specify a pattern such as go test -bench Factorial, which only runs benchmark functions whose names contain the word Factorial.

The b.N field indicates how many times the benchmark function runs. The testing package adjusts this value by increasing it until the benchmark runs for at least one second.

Measuring memory allocations

In Go, each function has its own ‘stack memory’. As the name suggests, stack memory is allocated and deallocated in a last-in, first-out (LIFO) order. This memory is typically only usable within the function, and it is often limited in size. The other type of memory that a Go program may use is heap memory. Heap memory is allocated and deallocated in a random order. There is only one heap shared by all functions.

With the stack memory, there is no risk that the memory may get lost or misused since it belongs to a specific function and can be reclaimed at the end of the function. Heap memory is more of a problem: it is sometimes unclear when the memory should be reclaimed. Programming languages like Go rely on a garbage collector to solve this problem. For example, when we create a new slice with the make function, we do not need to worry about reclaiming the memory. Go automatically reclaims it. However, it may still be bad for performance to constantly allocate and deallocate memory. In many real-world systems, memory management becomes a performance bottleneck.

Thus it is sometimes interesting to include the memory usage as part of the benchmark. The Go testing package allows you to measure the number of heap allocations made. Typically, in Go, it roughly corresponds to the number of calls to make and to the number of objects that the garbage collector must handle. The following extended program computes the factorial by storing its computation in dynamically allocated slices:

package main

import (
    "fmt"
    "testing"
)

var fact int

func BenchmarkFactorial(b *testing.B) {
    for n := 0; n < b.N; n++ {
        fact = 1
        for i := 1; i <= 10; i++ {
            fact *= i
        }
    }
}
func BenchmarkFactorialBuffer(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buffer := make([]int, 11)
        buffer[0] = 1
        for i := 1; i <= 10; i++ {
            buffer[i] = i * buffer[i-1]
        }
    }
    b.ReportAllocs()
}

func BenchmarkFactorialBufferLarge(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buffer := make([]int, 100001)
        buffer[0] = 1
        for i := 1; i <= 100000; i++ {
            buffer[i] = i * buffer[i-1]
        }
    }
    b.ReportAllocs()
}

func main() {
    res := testing.Benchmark(BenchmarkFactorial)
    fmt.Println("BenchmarkFactorial", res)
    resmem := testing.Benchmark(BenchmarkFactorialBuffer)
    fmt.Println("BenchmarkFactorialBuffer", resmem, resmem.MemString())
    resmem = testing.Benchmark(BenchmarkFactorialBufferLarge)
    fmt.Println("BenchmarkFactorialBufferLarge", resmem, resmem.MemString())
}

If you run such a Go program, you might get the following result:

BenchmarkFactorial 90887572             14.10 ns/op
BenchmarkFactorialBuffer 88609930               11.96 ns/op        0 B/op              0 allocs/op
BenchmarkFactorialBufferLarge     4408      249263 ns/op   802816 B/op         1 allocs/op

The last function allocates 802816 bytes per operation, unlike the first two. If Go determines that data is not referenced after the function returns (an analysis called ‘escape analysis’), and if the amount of memory used is sufficiently small, it avoids allocating the memory on the heap, preferring stack memory instead. In the case of the last function, the memory usage is too high, so the allocation goes to the heap rather than the stack.
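
You can ask the compiler to report its escape analysis decisions with the -gcflags=-m flag (the same flag is used again in the inlining section below). Assuming the program above is saved as factorial.go, the diagnostics may look roughly like the following; the exact positions and wording vary with the Go version:

$ go build -gcflags=-m factorial.go
./factorial.go:10:17: make([]int, 11) does not escape
./factorial.go:19:17: make([]int, 100001) escapes to heap
...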

Measuring memory usage

Your operating system provides memory to a running process in units of pages. The operating system cannot provide memory in smaller units than a page. Thus when you allocate memory in a program, the allocation may cost no additional pages if enough pages are already available, or it may force the operating system to provide more pages.

The size of a page depends on the operating system and its configuration. It can often vary between 4 kilobytes and 16 kilobytes although much larger pages are also possible (e.g., 1 gigabyte).

A page is a contiguous array of virtual memory addresses. A page may also represent actual physical memory. However, operating systems tend to map only used pages to physical memory. An operating system may provide a nearly endless supply of pages to a process without ever mapping them to physical memory. Thus it is not simple to ask how much memory a program uses. A program may appear to use a lot of (virtual) memory while not using much physical memory, and conversely.

The page size impacts both the performance and the memory usage. Allocating pages to a process is not free; it takes some effort. Among other things, the operating system cannot just reuse a memory page from another process as is. Doing so would be a security threat, because you could gain indirect access to the data stored in memory by another process. This other process could have held your passwords or other sensitive information in memory. Typically, an operating system has to initialize (e.g., set to zero) a newly assigned page. Furthermore, mapping the pages to their actual physical memory also takes time. To accelerate the mapping process, modern systems often use a translation lookaside buffer to keep the map in cache. When the translation lookaside buffer runs out of space, a manual computation of the page mapping may be required, an expensive process. Thus large pages may improve the performance of some programs. However, large pages force the operating system to provide memory in larger chunks to a process, potentially wasting precious memory. You can write a Go program which prints out the page size of your system:

package main

import (
    "fmt"
    "os"
)

func main() {
    pageSize := os.Getpagesize()
    fmt.Printf("Page size: %d bytes (%d KB)\n", pageSize, pageSize/1024)
}

Go makes it relatively easy to measure the number of pages allocated to a program by the operating system. Nevertheless, some care is needed. Because Go uses a garbage collector to free allocated memory, there might be a delay between the moment you no longer need some memory and the actual freeing of that memory. You may force Go to run the garbage collector immediately with the function call runtime.GC(). In practice you should rarely invoke the garbage collector deliberately, but for our purposes (measuring memory usage), it is useful.

There are several memory metrics. In Go, some of the most useful are HeapSys and HeapAlloc. The first indicates how much memory (in bytes) has been given to the program by the operating system. The second value, which is typically lower, indicates how much of that memory is actively in use by the program.

The following program allocates ever larger slices, and then ever smaller slices. In theory, the memory usage should first go up, and then go down:

package main

import (
    "fmt"
    "os"
    "runtime"
)

func main() {
    pageSize := os.Getpagesize()
    var m runtime.MemStats
    runtime.GC()
    runtime.ReadMemStats(&m)
    fmt.Printf(
        "HeapSys = %.3f MiB, HeapAlloc =  %.3f MiB,  %.3f pages\n",
        float64(m.HeapSys)/1024.0/1024.0,
        float64(m.HeapAlloc)/1024.0/1024.0,
        float64(m.HeapSys)/float64(pageSize),
    )
    i := 100
    for ; i < 1000000000; i *= 10 {
        runtime.GC()
        s := make([]byte, i)
        runtime.ReadMemStats(&m)
        fmt.Printf(
            "%.3f MiB, HeapSys = %.3f MiB, HeapAlloc =  %.3f MiB,  %.3f pages\n",
            float64(len(s))/1024.0/1024.0,
            float64(m.HeapSys)/1024.0/1024.0,
            float64(m.HeapAlloc)/1024.0/1024.0,
            float64(m.HeapSys)/float64(pageSize),
        )
    }
    for ; i >= 100; i /= 10 {
        runtime.GC()
        s := make([]byte, i)
        runtime.ReadMemStats(&m)
        fmt.Printf(
            "%.3f MiB, HeapSys = %.3f MiB, HeapAlloc =  %.3f MiB,  %.3f pages\n",
            float64(len(s))/1024.0/1024.0,
            float64(m.HeapSys)/1024.0/1024.0,
            float64(m.HeapAlloc)/1024.0/1024.0,
            float64(m.HeapSys)/float64(pageSize),
        )
    }
    runtime.GC()
    runtime.ReadMemStats(&m)
    fmt.Printf(
        "HeapSys = %.3f MiB, HeapAlloc =  %.3f MiB,  %.3f pages\n",
        float64(m.HeapSys)/1024.0/1024.0,
        float64(m.HeapAlloc)/1024.0/1024.0,
        float64(m.HeapSys)/float64(pageSize),
    )
}

The program calls os.Getpagesize() to get the underlying system’s memory page size in bytes as an integer, and assigns it to a variable pageSize. It declares a variable m of type runtime.MemStats, which is a struct that holds various statistics about the memory allocator and the garbage collector. The program repeatedly calls runtime.GC() to trigger a garbage collection cycle manually, which may free some memory and make it available for release. It calls runtime.ReadMemStats(&m) to populate the m variable with the current memory statistics. We can reuse the same variable m from call to call. The purpose of this program is to demonstrate how the memory usage of a Go program changes depending on the size and frequency of memory allocations and deallocations, and how the garbage collector and the runtime affect the memory release. The program prints the memory usage before and after each allocation, and shows how the m.HeapSys, m.HeapAlloc, and m.HeapSys / pageSize values grow and shrink accordingly.

If you run this program, you may observe that the program tends to hold on to memory that you have allocated and later released. This is partly a matter of optimization: acquiring memory takes time, and we wish to avoid giving back pages only to request them again soon after. It illustrates that it can be challenging to determine how much memory a program actually uses.

The program may print something like the following:

$ go run mem.go
HeapSys = 3.719 MiB, HeapAlloc =  0.367 MiB,  238.000 pages
0.000 MiB, HeapSys = 3.719 MiB, HeapAlloc =  0.367 MiB,  238.000 pages
0.001 MiB, HeapSys = 3.719 MiB, HeapAlloc =  0.383 MiB,  238.000 pages
0.010 MiB, HeapSys = 3.688 MiB, HeapAlloc =  0.414 MiB,  236.000 pages
0.095 MiB, HeapSys = 3.688 MiB, HeapAlloc =  0.477 MiB,  236.000 pages
0.954 MiB, HeapSys = 3.688 MiB, HeapAlloc =  1.336 MiB,  236.000 pages
9.537 MiB, HeapSys = 15.688 MiB, HeapAlloc =  9.914 MiB,  1004.000 pages
95.367 MiB, HeapSys = 111.688 MiB, HeapAlloc =  95.750 MiB,  7148.000 pages
953.674 MiB, HeapSys = 1067.688 MiB, HeapAlloc =  954.055 MiB,  68332.000 pages
95.367 MiB, HeapSys = 1067.688 MiB, HeapAlloc =  95.750 MiB,  68332.000 pages
9.537 MiB, HeapSys = 1067.688 MiB, HeapAlloc =  9.914 MiB,  68332.000 pages
0.954 MiB, HeapSys = 1067.688 MiB, HeapAlloc =  1.336 MiB,  68332.000 pages
0.095 MiB, HeapSys = 1067.688 MiB, HeapAlloc =  0.477 MiB,  68332.000 pages
0.010 MiB, HeapSys = 1067.688 MiB, HeapAlloc =  0.414 MiB,  68332.000 pages
0.001 MiB, HeapSys = 1067.688 MiB, HeapAlloc =  0.383 MiB,  68332.000 pages
0.000 MiB, HeapSys = 1067.688 MiB, HeapAlloc =  0.375 MiB,  68332.000 pages
HeapSys = 1067.688 MiB, HeapAlloc =  0.375 MiB,  68332.000 pages

Observe how, at the very beginning and at the very end, over a third of a megabyte of memory (238 pages) is reported as being in use. Furthermore, over 68,000 pages remain allocated to the program at the very end, even though no data structure remains in scope within the main function.
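
If you want to encourage the runtime to return freed pages to the operating system, the standard library offers debug.FreeOSMemory in the runtime/debug package: it forces a garbage collection and then attempts to return as much memory to the operating system as possible. A minimal sketch:

package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

func main() {
    s := make([]byte, 1<<30) // allocate 1 GiB
    s[0] = 1                 // touch the slice so it is clearly live here
    s = nil                  // drop the only reference

    // Force a collection and ask the runtime to return freed
    // pages to the operating system.
    debug.FreeOSMemory()

    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("HeapSys = %.3f MiB, HeapReleased = %.3f MiB\n",
        float64(m.HeapSys)/1024.0/1024.0,
        float64(m.HeapReleased)/1024.0/1024.0)
}

Even then, HeapSys may not shrink; instead, the HeapReleased counter grows to reflect the pages handed back to the operating system.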

Inlining

One of the most powerful optimization techniques that a compiler may apply is function inlining: the compiler brings some of the called functions directly into the calling functions.

Go makes it easy to tell which functions are inlined. We can also easily request that the compiler not inline a function by adding the line //go:noinline right before the function.

Let us consider this program, which contains two benchmarks where we sum all odd integers in a range.

package main

import (
    "fmt"
    "testing"
)

func IsOdd(i int) bool {
    return i%2 == 1
}

//go:noinline
func IsOddNoInline(i int) bool {
    return i%2 == 1
}

func BenchmarkCountOddInline(b *testing.B) {
    for n := 0; n < b.N; n++ {
        sum := 0
        for i := 1; i < 1000; i++ {
            if IsOdd(i) {
                sum += i
            }
        }
    }
}

func BenchmarkCountOddNoinline(b *testing.B) {
    for n := 0; n < b.N; n++ {
        sum := 0
        for i := 1; i < 1000; i++ {
            if IsOddNoInline(i) {
                sum += i
            }
        }
    }
}

func main() {
    res1 := testing.Benchmark(BenchmarkCountOddInline)
    fmt.Println("BenchmarkCountOddInline", res1)
    res2 := testing.Benchmark(BenchmarkCountOddNoinline)
    fmt.Println("BenchmarkCountOddNoinline", res2)
}

In Go, the flag -gcflags=-m tells the compiler to report the main optimizations it does. If you call this program simpleinline.go and compile it with the command go build -gcflags=-m simpleinline.go, you may see the following:

$ go build -gcflags=-m simpleinline.go
./simpleinline.go:8:6: can inline IsOdd
./simpleinline.go:21:12: inlining call to IsOdd
...

If you run the benchmark, you should see that the inlined version is much faster:

$ go run simpleinline.go
BenchmarkCountOddInline  3716786           294.6 ns/op
BenchmarkCountOddNoinline  1388792         864.8 ns/op

Inlining is not always beneficial: in some instances, it can generate large binaries and it may even slow down the software. However, when it is applicable, it can have a large beneficial effect.

Go tries as hard as possible to inline functions, but it has limitations. For example, compilers often find it difficult to inline recursive functions. Let us benchmark two factorial functions, one that is recursive, and one that is not.

package main

import (
    "fmt"
    "testing"
)

var array = make([]int, 1000)

func Factorial(n int) int {
    if n < 0 {
        return 0
    }
    if n == 0 {
        return 1
    }
    return n * Factorial(n-1)
}

func FactorialLoop(n int) int {
    result := 1
    for i := 1; i <= n; i++ {
        result *= i
    }
    return result
}

func BenchmarkFillNoinline(b *testing.B) {
    for n := 0; n < b.N; n++ {
        for i := 1; i < 1000; i++ {
            array[i] = Factorial(i)
        }
    }
}

func BenchmarkFillInline(b *testing.B) {
    for n := 0; n < b.N; n++ {
        for i := 1; i < 1000; i++ {
            array[i] = FactorialLoop(i)
        }
    }
}

func main() {
    res1 := testing.Benchmark(BenchmarkFillNoinline)
    fmt.Println("BenchmarkFillNoinline", res1)
    res2 := testing.Benchmark(BenchmarkFillInline)
    fmt.Println("BenchmarkFillInline", res2)
    fmt.Println(float64(res1.NsPerOp()) / float64(res2.NsPerOp()))
}

Though FactorialLoop and Factorial are equivalent, if you run this program, you should find that the non-recursive function (FactorialLoop) is much faster.
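
You can confirm the compiler’s decision with the same -gcflags=-m flag used earlier. Assuming the file is named recursiveinline.go, the report may contain a line roughly like the following (the exact wording depends on the Go version):

$ go build -gcflags=-m recursiveinline.go
./recursiveinline.go:11:6: cannot inline Factorial: recursive
...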

Cache line

Our computers read and write memory using small blocks of memory called “cache lines”. The cache line size is usually fixed and small (e.g., 64 or 128 bytes). To attempt to measure the cache-line size, we may use a strided copy. From a large array, we copy every Nth byte to another large array. We repeat this process N times. Thus if the original array contains 1000 bytes, we always copy about 1000 bytes in total, whether N = 1, N = 2, N = 4, or N = 8.

When N is sufficiently large (say N = 16), the problem should be essentially memory bound: the performance is not limited by the number of instructions, but by the system’s ability to load and store cache lines. If N is larger than twice the cache line, then we can effectively skip one cache line out of two. If N is smaller than the cache line, then every cache line must be accessed. You expect a sufficiently large stride to be significantly faster.

One limitation to this approach is that processors may fetch more cache lines than needed so we may overestimate the size of the cache line. However, unless memory bandwidth is overly abundant, we should expect processors to try to limit the number of cache lines fetched.

Let us run an experiment. For each stride size, we repeat 10 times and record the maximum, the minimum and the average. Consider the following program.

package main

import (
    "fmt"
    "time"
)

const size = 33554432 // 32 MB
func Cpy(arr1 []uint8, arr2 []uint8, slice int) {
    for i := 0; i < len(arr1); i += slice {
        arr2[i] = arr1[i]
    }
}

func AverageMinMax(f func() float64) (float64, float64, float64) {
    var sum float64
    var minimum float64
    var maximum float64

    for i := 0; i < 10; i++ {
        arr1 = make([]uint8, size)
        arr2 = make([]uint8, size)

        v := f()
        sum += v
        if i == 0 || v < minimum {
            minimum = v
        }
        if i == 0 || v > maximum {
            maximum = v
        }
    }
    return sum / 10, minimum, maximum
}

var arr1 []uint8
var arr2 []uint8

func run(size int, slice int) float64 {
    start := time.Now()
    times := 10
    for i := 0; i < times*slice; i++ {
        Cpy(arr1, arr2, slice)
    }
    end := time.Now()
    dur := float64(end.Sub(start)) / float64(times*slice)
    return dur
}

func main() {
    for slice := 16; slice <= 4096; slice *= 2 {
        a, m, M := AverageMinMax(func() float64 { return run(size, slice-1) })
        fmt.Printf("%10d: %10.1f GB/s [%4.1f - %4.1f]\n", slice-1, float64(size)/a, float64(size)/M, float64(size)/m)
    }
}

We may get the following result:

$ go run cacheline.go
        15:       23.6 GB/s [21.3 - 24.4]
        31:       24.3 GB/s [23.8 - 24.5]
        63:       24.2 GB/s [23.6 - 24.6]
       127:       26.9 GB/s [23.8 - 27.9]
       255:       40.8 GB/s [37.8 - 43.6]
       511:      162.0 GB/s [130.4 - 203.4]
      1023:      710.0 GB/s [652.0 - 744.4]
      2047:      976.1 GB/s [967.1 - 983.8]
      4095:     1247.4 GB/s [1147.7 - 1267.0]

We see that the performance increases substantially when the stride goes from 127 to 255. It suggests that the cache line has 128 bytes. If you run this same benchmark on your own system, you may get a different result.

The results need to be interpreted with care: we are not measuring a copy speed of 1247.4 GB/s. Rather, we can copy large arrays at such a speed if we only copy one byte out of every 4095 bytes.
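
On Linux, you can cross-check the estimate against what the kernel reports through sysfs; the path below is Linux-specific and may not exist on other systems. A small sketch:

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    // Linux exposes the line size of the first CPU's first cache
    // level through sysfs; other systems lack this path.
    data, err := os.ReadFile(
        "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size")
    if err != nil {
        fmt.Println("could not read sysfs:", err)
        return
    }
    fmt.Println("cache line size:", strings.TrimSpace(string(data)), "bytes")
}

The neighbouring index1, index2, and so on directories describe the other cache levels, including their sizes.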

CPU Cache

When programming, we often do not think directly about memory. When we do consider that our data uses memory, we often think of it as homogeneous: memory is like a large uniform canvas upon which the computer writes and reads its data. However, your main memory (RAM) is typically buffered using a small amount of memory that resides close to the processor core (CPU cache). We often have several layers of cache memory (e.g., L1, L2, L3): L1 is typically small but very fast whereas, for example, L3 is larger but slower.

You can empirically measure the effect of the cache. If you take a small array and shuffle it randomly, you will be moving data primarily in the CPU cache, which is fast. If you take a larger array, you will move data in memory without much help from the cache, a process that is much slower. Thus shuffling ever larger arrays is a way to determine the size of your cache. It may prove difficult to tell exactly how many layers of cache you have and how large each layer is. However, you can usually tell when your array is significantly larger than the CPU cache.

We are going to write a random shuffle function: Shuffle(arr []uint32). It uses the Fisher-Yates shuffle, which involves going through the array in reverse and swapping each element with another randomly chosen from those preceding it. The function uses a seed variable to generate random numbers from a simple mathematical formula; for our purposes, we use a simplistic generator that multiplies the (updated) seed by the index. The function bits.Mul64 calculates the product of two 64-bit numbers and returns the result as two 64-bit numbers: the most significant (hi) and the least significant. The most significant value is necessarily between 0 and i (inclusive). We use this most significant value as the random index. The function then exchanges the elements using multiple assignment. We call this shuffle function several times, on inputs of different sizes, and report the time normalized by the size of the input.

package main

import (
    "fmt"
    "math/bits"
    "time"
)

func Shuffle(arr []uint32) {
    seed := uint64(1234)
    for i := len(arr) - 1; i > 0; i-- {
        seed += 0x9E3779B97F4A7C15
        hi, _ := bits.Mul64(seed, uint64(i+1))
        j := int(hi)
        arr[i], arr[j] = arr[j], arr[i]
    }
}

func AverageMinMax(f func() float64) (float64, float64, float64) {
    var sum float64
    var minimum float64
    var maximum float64

    for i := 0; i < 10; i++ {
        v := f()
        sum += v
        if i == 0 || v < minimum {
            minimum = v
        }
        if i == 0 || v > maximum {
            maximum = v
        }
    }
    return sum / 10, minimum, maximum
}

func run(size int) float64 {
    arr := make([]uint32, size)

    for i := range arr {
        arr[i] = uint32(i + 1)
    }
    start := time.Now()
    end := time.Now()
    times := 0
    for ; end.Sub(start) < 100_000_000; times++ {
        Shuffle(arr)
        end = time.Now()
    }
    dur := float64(end.Sub(start)) / float64(times)
    return dur / float64(size)
}

func main() {
    for size := 4096; size <= 33554432; size *= 2 {
        fmt.Printf("%20d KB ", size/1024*4)
        a, m, M := AverageMinMax(func() float64 { return run(size) })
        fmt.Printf(" %.2f [%.2f, %.2f]\n", a, m, M)
    }
}

A possible output of running this program might be:

$ go run cache.go
                  16 KB  0.70 [0.66, 0.93]
                  32 KB  0.65 [0.64, 0.66]
                  64 KB  0.64 [0.64, 0.66]
                 128 KB  0.64 [0.64, 0.67]
                 256 KB  0.65 [0.64, 0.66]
                 512 KB  0.70 [0.70, 0.71]
                1024 KB  0.77 [0.76, 0.79]
                2048 KB  0.83 [0.82, 0.84]
                4096 KB  0.87 [0.86, 0.90]
                8192 KB  0.92 [0.91, 0.95]
               16384 KB  1.10 [1.06, 1.24]
               32768 KB  2.34 [2.28, 2.52]
               65536 KB  3.90 [3.70, 4.25]
              131072 KB  5.66 [4.80, 9.78]

We see that between 16 KB and 16384 KB, the time per element shuffled does not increase much, even though we repeatedly double the input size. However, between 16384 KB and 32768 KB, the time per element doubles, and it keeps increasing as the array grows. This suggests that the size of the CPU cache is about 16384 KB in this instance.

Memory bandwidth

You can only read and write memory up to a maximal speed. Such limits can be difficult to measure. In particular, you may need several cores in a multi-core system to achieve the best possible bandwidth. For simplicity, let us consider the maximal read bandwidth.

Many large systems do not have a single bandwidth number. For example, many rely on NUMA (Non-Uniform Memory Access): in a NUMA system, each processor has its own local memory, which it can access faster than the memory of other processors.

The bandwidth also depends to some extent on the amount of memory requested. If the memory fits in CPU cache, only the first access may be expensive. A very large memory region may not fit in RAM and may require disk storage. Even if it fits in RAM, an overly large memory region might require many memory pages, and accessing all of them may cause page walking due to the limits of the translation lookaside buffer.

If the memory is accessed at random locations, it might be difficult for the system to sustain a maximal bandwidth because the system cannot predict easily where the next memory load occurs. To get the best bandwidth, you may want to access the memory linearly or according to some predictable pattern.

Let us consider the following code:

package main

import (
    "fmt"
    "time"
)

func run() float64 {
    bestbandwidth := 0.0
    arr := make([]uint8, 2*1024*1024*1024) // 2 GB
    for i := 0; i < len(arr); i++ {
        arr[i] = 1
    }
    for t := 0; t < 20; t++ {
        start := time.Now()
        acc := 0
        for i := 0; i < len(arr); i += 64 {
            acc += int(arr[i])
        }
        end := time.Now()
        if acc != len(arr)/64 {
            panic("!!!")
        }
        bandwidth := float64(len(arr)) / end.Sub(start).Seconds() / 1024 / 1024 / 1024
        if bandwidth > bestbandwidth {
            bestbandwidth = bandwidth
        }
    }
    return bestbandwidth
}

func main() {
    for i := 0; i < 10; i++ {
        fmt.Printf(" %.2f GB/s\n", run())
    }
}

The code defines two functions: run and main. The main function is the entry point for the program, and it calls the run function 10 times, printing the result each time. The run function is a custom function that measures the memory bandwidth of the system. It does this by performing the following steps:

  • It declares a variable called bestbandwidth and initializes it to 0.0. This variable stores the highest bandwidth value obtained during the execution of the function.
  • It creates a slice of bytes (uint8) called arr, with a length of 2 GB, initialized with 1s.
  • The timed loop accesses only every 64th element of the slice, skipping the rest. Given that most systems have a cache-line size of 64 bytes or more, this is enough to touch each cache line.
  • It calculates the bandwidth by dividing the size of the slice (in bytes) by the difference between the end and start times (in seconds), and then dividing by 1024 three times to convert the result to gigabytes per second (GB/s).
  • It repeats the measurement 20 times and returns the best result, to account for possible variations in system performance.

The main function prints the result 10 times, to show the consistency of the measurement.
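
As noted above, a single core may not saturate the memory subsystem. Here is a hedged sketch, not from the original post, of a multi-goroutine variant: each goroutine scans its own chunk of the array, touching one byte per cache line, and we time the whole scan. The worker count is illustrative; tune it to your core count.

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    arr := make([]uint8, 2*1024*1024*1024) // 2 GB
    for i := range arr {
        arr[i] = 1
    }
    workers := 8 // illustrative; tune to your core count
    chunk := len(arr) / workers
    start := time.Now()
    var wg sync.WaitGroup
    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func(begin int) {
            defer wg.Done()
            acc := 0
            // Touch one byte per 64-byte cache line in this chunk.
            for i := begin; i < begin+chunk; i += 64 {
                acc += int(arr[i])
            }
            _ = acc
        }(w * chunk)
    }
    wg.Wait()
    seconds := time.Since(start).Seconds()
    fmt.Printf("%.2f GB/s\n", float64(len(arr))/seconds/1024/1024/1024)
}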

Memory latency and parallelism

Latency is often described as the time delay between the beginning of a request and the moment when you are served. Thus if you go to a restaurant, the latency you might be interested in is the time it will take before you can start eating. Latency is distinct from throughput: a restaurant might be able to serve hundreds of customers at once, but still have high latency (long delays for each customer). If you put a lot of data on a very large disk, you can put this disk in a truck and drive the truck between two cities. It could represent a large bandwidth (much data is moved per unit of time), but the latency could be quite poor (hours). Similarly, you could shine a laser at your partner when supper is ready: the information could arrive without much delay even if you are very far away, but you are communicating little information (low throughput).

One way to express this trade-off between latency and throughput is with Little’s Law: L = λW, where L is the average number of elements in the system, λ is the throughput (long-term average arrival rate of new elements), and W is the latency, or the average amount of time that elements spend waiting. Thus if you want to have L customers at all times in your restaurant, and fewer customers arrive, you should serve the customers with greater delays. Little’s Law works with our memory subsystems as well: computers can sustain a maximum number of concurrent memory requests, each memory request has a latency, and there is an overall bandwidth. For example, if each request takes 100 ns and the system can keep 10 requests in flight, it completes at most 10 requests per 100 ns; at 64 bytes per request, that is about 6.4 GB/s. If latency does not improve, we can still improve bandwidth or throughput by increasing the number of requests that can be sustained concurrently. Unfortunately, system designers are often forced to make this choice, and so it is not uncommon to see stagnant or worsening memory latencies despite fast-improving memory bandwidths.

A common illustration of the concept of memory latency is the traversal of a linked list. In computer science, a linked list is a data structure made of nodes, with each node linked (by a pointer) to the next node. The nodes may not be laid out consecutively in memory, but even if they are, accessing each successive node requires at least a small delay. On current processors, it can often take at least 3 cycles to load data from memory, even if the memory is in cache. Thus determining the length of the list by traversing the whole linked list can take time, and most of this time consists of the successive delays. The following code benchmarks the time required to traverse a linked list made of a million nodes. Though the time varies depending on your system, it may represent a sizeable fraction of a millisecond.

package main

import (
    "fmt"
    "testing"
)

type Node struct {
    data int
    next *Node
}

func build(volume int) *Node {
    var head *Node
    for i := 0; i < volume; i++ {
        head = &Node{i, head}
    }
    return head
}

var list *Node
var N int

func BenchmarkLen(b *testing.B) {
    for n := 0; n < b.N; n++ {
        len := 0
        for p := list; p != nil; p = p.next {
            len++
        }
        if len != N {
            b.Fatalf("invalid length: %d", len)
        }
    }
}

func main() {
    N = 1000000
    list = build(N)
    res := testing.Benchmark(BenchmarkLen)
    fmt.Println("milliseconds: ", float64(res.NsPerOp())/1e6)

    fmt.Println("nanoseconds per el.", float64(res.NsPerOp())/float64(N))
}

In this code, a Node struct is defined with two fields: data is an integer representing the value stored in the node, and next is a pointer to the next node in the linked list. We could also add a pointer to the previous node, but that is not necessary in our case.

The build function creates a singly linked list of nodes from an integer volume given as an argument. It initializes an empty linked list (head is initially nil). It iterates from 0 to volume-1, creating a new node with value i and pointing its next field to the current head. The new node becomes the new head. The function returns the final head of the linked list.

The main function initializes two global variables (list and N) storing respectively the head of the list and the expected length. These values are used by the BenchmarkLen function. This code demonstrates how to create a linked list, calculate its length, and benchmark the performance of the length calculation. Our length computation is almost entirely bounded (limited) by memory latency, the time it takes to access the memory. The computations that we are doing (comparisons, increments) are unimportant to the performance. To illustrate this observation, we can try traversing two linked lists simultaneously, as in this example:

package main

import (
    "fmt"
    "testing"
)

type Node struct {
    data int
    next *Node
}

func build(volume int) *Node {
    var head *Node
    for i := 0; i < volume; i++ {
        head = &Node{i, head}
    }
    return head
}

var list1 *Node
var list2 *Node

var N int

func BenchmarkLen(b *testing.B) {
    for n := 0; n < b.N; n++ {
        len := 0
        for p1, p2 := list1, list2; p1 != nil && p2 != nil; p1, p2 = p1.next, p2.next {
            len++
        }
        if len != N {
            b.Fatalf("invalid length: %d", len)
        }
    }
}

func main() {
    N = 1000000
    list1 = build(N)
    list2 = build(N)

    res := testing.Benchmark(BenchmarkLen)
    fmt.Println("milliseconds: ", float64(res.NsPerOp())/1e6)

    fmt.Println("nanoseconds per el.", float64(res.NsPerOp())/float64(N))
}

If you run this new code, you might find that the benchmark results are close to the single-list ones. That is not surprising: the processor is mostly just waiting for the next node, and waiting for two nodes is not much more expensive. For this reason, when programming, you should limit memory accesses as much as possible. Use simple arrays when you can instead of linked lists or node-based tree structures.

We would like to work with arbitrarily large data structures, so that we can stress memory accesses outside of the cache. Sattolo’s algorithm is a variant of the well-known random shuffle that generates a random cyclic permutation of an array or list. Sattolo’s algorithm ensures that the data is permuted using a single cycle: starting from one element in a list of size n, we find that this element is moved to another position, which is itself moved to another position, and so forth, until after n moves we end up back at the initial position. To apply Sattolo’s algorithm to an array or list of elements, we go through the indices i from 0 to n-1, where n is the length of the array. For each index i, we choose a random index j such that i < j < n, and we swap the elements at indices i and j. E.g., suppose we have the array [0, 1, 2, 3, 4]. The algorithm might produce the cyclic permutation [2, 0, 3, 4, 1]: starting at index 0, we visit indices 2, 3, 4, 1, and then return to index 0. With this algorithm, we can visit all values in an array exactly once in random order.

From an array containing the indexes 0 to n-1 permuted with Sattolo’s algorithm, we load the first element, read its value, move to the corresponding index, and so forth. After n operations, we come back to the initial position. Because each operation involves a memory load, it is limited by memory latency. We can try to go faster with memory-level parallelism: we pick k positions spread out in the cycle and advance from these k initial positions n/k times through the cycle. Because computers can load many values in parallel, this algorithm should be faster for larger values of k. However, as k increases, we may see fewer and fewer gains because systems have limited memory-level parallelism and bandwidth. The following program implements this idea.

package main

import (
    "fmt"
    "math/rand"
    "time"
)

// makeCycle creates a cycle of a specified length starting at element 0
func makeCycle(length int) ([]uint64, []uint64) {
    array := make([]uint64, length)
    index := make([]uint64, length)
    // Create a cycle of maximum length within the big array
    for i := 0; i < length; i++ {
        array[i] = uint64(i)
    }

    // Sattolo shuffle
    for i := 0; i+1 < length; i++ {
        swapIdx := rand.Intn(length-i-1) + i + 1
        array[i], array[swapIdx] = array[swapIdx], array[i]
    }

    total := 0
    cur := uint64(0)
    // Walk the cycle once starting from position 0, recording the
    // order in which each position is visited. A do-while style loop
    // is needed because the walk starts at position 0 itself.
    for {
        index[total] = cur
        total++
        cur = array[cur]
        if cur == 0 {
            break
        }
    }
    return array, index
}

// setupPointers sets up pointers based on the given index
func setupPointers(index []uint64, length, mlp int) []uint64 {
    sp := make([]uint64, mlp)
    sp[0] = 0

    totalInc := 0
    for m := 1; m < mlp; m++ {
        totalInc += length / mlp
        sp[m] = index[totalInc]
    }
    return sp
}

func runBench(array []uint64, index []uint64, mlp int) time.Duration {
    length := len(array)
    sp := setupPointers(index, length, mlp)
    hits := length / mlp
    before := time.Now()
    for i := 0; i < hits; i++ {
        for m := 0; m < mlp; m++ {
            sp[m] = array[sp[m]]
        }
    }
    after := time.Now()
    return after.Sub(before)
}

func main() {
    const length = 100000000
    array, index := makeCycle(length)
    fmt.Println("Length:", length*8/1024/1024, "MB")
    base := runBench(array, index, 1)
    fmt.Println("Lanes:", 1, "Time:", base)

    for mlp := 2; mlp <= 40; mlp++ {
        t := runBench(array, index, mlp)
        fmt.Println("Lanes:", mlp, "Speedup:", fmt.Sprintf("%.1f", float64(base)/float64(t)))
    }
}

The function makeCycle creates a cycle of a specified length starting at element 0. It initializes two slices, array and index, both of type []uint64. The array slice represents the elements in the cycle. The index slice stores the indices of the elements in the cycle, so that we can more easily access a given position in the cycle. The function initializes array with values from 0 to length-1, applies Sattolo’s shuffle algorithm to array to create a random cyclic permutation, and records in index the order in which the cycle visits each position. It returns both array and index.

The function setupPointers calculates the increment value (totalInc) based on the length and the number of lanes (mlp), and assigns the indices from index to sp based on the calculated increments.

The function runBench benchmarks the execution time for a given number of lanes (mlp). It initializes a slice sp using setupPointers. The function iterates through the pointers in sp and updates them by following the indices in array. It measures the execution time and returns it as a time.Duration instance. The main function first computes the running time for 1 lane, and then reports the gains when using multiple lanes.

Overall, this code generates a cycle of a specified length, sets up pointers, and benchmarks the execution time for different numbers of lanes, exploring memory-level parallelism: the runBench function keeps several independent chains of loads in flight at once. The speedup is calculated by comparing the execution time for different numbers of lanes; the larger the speedup, the more efficient the memory-level parallel execution. The general principle is that you can often improve the performance of a system that faces high latencies by breaking the data dependencies. Instead of putting all your data in a long chain, try to have no chain at all or, if you must have chains, use several smaller chains.

Superscalarity and data dependency

Most current processors are superscalar (as opposed to ‘scalar’), meaning that they can execute and retire several instructions per CPU cycle. That is, even if you have a single CPU core, there is much parallelism involved. Some processors can retire 8 instructions per cycle or more. Not all code routines benefit equally from superscalar execution. Several factors can limit your processor to executing only a few instructions per cycle. Having to wait on memory accesses is one such factor. Another common factor is data dependency: when the next instruction depends on the result of a preceding instruction, it may have to wait before it starts executing. To illustrate, consider functions that compute the successive differences between elements of an array (e.g., given 5,7,6, you get the initial value 5 followed by 2 and -1), and the reverse operation, which sums up all the differences to recover the original values. You may implement these functions as such:

func successiveDifferences(arr []int) {
    base := arr[0]
    for i := 1; i < len(arr); i++ {
        base, arr[i] = arr[i], arr[i]-base
    }
}

func prefixSum(arr []int) {
    for i := 1; i < len(arr); i++ {
        arr[i] = arr[i] + arr[i-1]
    }
}

Assuming that the compiler does not optimize these functions in a non-trivial manner (e.g., using SIMD instructions), we can reason relatively simply about the performance. For the successive differences, we need approximately one subtraction per element in the array. For the prefix sum, we need approximately one addition per element in the array. At a glance, the two look quite similar. However, the data dependencies differ. To compute the difference between any two values in the array, you do not need to have computed the preceding differences. However, the prefix sum, as we implemented it, requires us to have computed all preceding sums before the next one can be computed. Let us write a small benchmarking program to test the performance difference:

package main

import (
    "fmt"
    "math/rand"
    "testing"
)

func successiveDifferences(arr []int) {
    base := arr[0]
    for i := 1; i < len(arr); i++ {
        base, arr[i] = arr[i], arr[i]-base
    }
}

func prefixSum(arr []int) {
    for i := 1; i < len(arr); i++ {
        arr[i] = arr[i] + arr[i-1]
    }
}

var array []int

func BenchmarkPrefixSum(b *testing.B) {
    for n := 0; n < b.N; n++ {
        prefixSum(array)
    }
}

func BenchmarkSuccessiveDifferences(b *testing.B) {
    for n := 0; n < b.N; n++ {
        successiveDifferences(array)
    }
}

func main() {
    array = make([]int, 100)
    for i := range array {
        array[i] = rand.Int()
    }
    res2 := testing.Benchmark(BenchmarkSuccessiveDifferences)
    fmt.Println("BenchmarkSuccessiveDifferences", res2)
    res1 := testing.Benchmark(BenchmarkPrefixSum)
    fmt.Println("BenchmarkPrefixSum", res1)

}

Your results will vary depending on your system. However, you should not be surprised if the prefix sum takes more time. On an Apple system, we got the following results:

BenchmarkSuccessiveDifferences 39742334         30.04 ns/op
BenchmarkPrefixSum  8307944            142.8 ns/op

The prefix sum can be several times slower, even though it appears at a glance that it should use a comparable number of instructions. In general, you cannot trust a hasty analysis. Just because two functions appear to do a similar amount of work does not mean that they have the same performance. Several factors must be taken into account, including data dependencies.
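
When the operation permits it, you can often break one long dependency chain into several independent ones. The prefix sum, as written above, cannot be reorganized this way without changing the algorithm, but a plain sum can. Here is an illustrative sketch, not from the original post, with two independent accumulators that give the processor two chains to advance in parallel:

func sumTwoChains(arr []int) int {
    var a, b int
    i := 0
    // Two independent accumulators: the additions into a and b
    // form separate dependency chains.
    for ; i+1 < len(arr); i += 2 {
        a += arr[i]
        b += arr[i+1]
    }
    if i < len(arr) { // odd length: pick up the last element
        a += arr[i]
    }
    return a + b
}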

Branch prediction

In part because processors are superscalar, they have been designed to execute speculatively: when facing a branch, the processor tries to guess the direction that will be taken, and it begins the computation optimistically. When the processor makes the correct prediction, it usually improves the performance, sometimes by a large amount. However, when the processor is unable to predict the branch accurately, branch prediction may become a net negative. Indeed, when the branch is mispredicted, the processor may have to restart the computation from the point where it made the wrong prediction, an expensive process that can waste several CPU cycles. To illustrate, let us first consider a function that copies the content of a slice into another slice of the same size:

func Copy(dest []uint, arr []uint) {
    if len(dest) < len(arr) {
        panic("dest is too small")
    }
    for i, v := range arr {
        dest[i] = v
    }
}

A more sophisticated function may copy only the odd elements:

func CopyOdd(dest []uint, arr []uint) {
    if len(dest) < len(arr) {
        panic("dest is too small")
    }
    for i, v := range arr {
        if v&1 == 1 {
            dest[i] = v
        }
    }
}

We may try to copy an array that contains random integers (both odd and even), only odd integers, or only even integers. The following program illustrates:

package main

import (
    "fmt"
    "math/rand"
    "testing"
)

func Copy(dest []uint, arr []uint) {
    if len(dest) < len(arr) {
        panic("dest is too small")
    }
    for i, v := range arr {
        dest[i] = v
    }
}

func CopyOdd(dest []uint, arr []uint) {
    if len(dest) < len(arr) {
        panic("dest is too small")
    }
    for i, v := range arr {
        if v&1 == 1 {
            dest[i] = v
        }
    }
}

var array []uint
var dest []uint

func BenchmarkCopyOdd(b *testing.B) {
    for n := 0; n < b.N; n++ {
        CopyOdd(dest, array)
    }
}

func BenchmarkCopy(b *testing.B) {
    for n := 0; n < b.N; n++ {
        Copy(dest, array)
    }
}

func main() {
    array = make([]uint, 10000)
    dest = make([]uint, len(array))

    for i := range array {
        array[i] = uint(rand.Uint32())
    }
    res0 := testing.Benchmark(BenchmarkCopy)
    fmt.Println("BenchmarkCopy (random)", res0)
    res1 := testing.Benchmark(BenchmarkCopyOdd)
    fmt.Println("BenchmarkCopyOdd (random)", res1)
    for i := range array {
        array[i] = uint(rand.Uint32()) | 1
    }
    res2 := testing.Benchmark(BenchmarkCopyOdd)
    fmt.Println("BenchmarkCopyOdd (odd data)", res2)
    for i := range array {
        array[i] = uint(rand.Uint32()) &^ 1
    }
    res3 := testing.Benchmark(BenchmarkCopyOdd)
    fmt.Println("BenchmarkCopyOdd (even data)", res3)
}

On an Apple system, we got the following results:

BenchmarkCopy (random)   414158       2936 ns/op
BenchmarkCopyOdd (random)    55408           19518 ns/op
BenchmarkCopyOdd (odd data)   402670          2975 ns/op
BenchmarkCopyOdd (even data)   402738         2896 ns/op

The last three timings involve the same function; only the input data differs. We find that all timings are similar in this case, except for the benchmark that copies random data: it is several times slower in our tests. The much longer running time is due to the presence of an unpredictable branch in our inner loop. Observe that the same function, subject to the same volume of data, can have vastly different performance characteristics, even though the computational complexity of the function does not change: in all instances, we have linear time complexity. If we expect our data to lead to poor branch prediction, we may reduce the number of branches in the code. The resulting code might be nearly branch free or even branchless. For example, we can use an arithmetic and logical expression to replace a conditional copy:

func CopyOddBranchless(dest []uint, arr []uint) {
    if len(dest) < len(arr) {
        panic("dest is too small")
    }
    for i, v := range arr {
        dest[i] ^= uint(-(v & 1)) & (v ^ dest[i])
    }
}

Let us review the complicated expression:

  • v & 1: This operation checks if the least significant bit of v is set (i.e., if v is odd).
  • -(v & 1): This negates the result of the previous operation. If v is odd, this becomes -1; otherwise, it becomes 0. However, -1 as an unsigned integer becomes the maximal value, the one with all of its bits set to 1.
  • v ^ dest[i]: This XORs the value of v with the corresponding element in the dest slice.
  • uint(-(v & 1)) & (v ^ dest[i]): If v is odd, it returns the XOR of v with dest[i]; otherwise, it returns 0.
  • Finally, dest[i] ^= uint(-(v & 1)) & (v ^ dest[i]) leaves dest[i] unchanged if v is even; otherwise it replaces dest[i] with v, using the fact that dest[i] ^ (v ^ dest[i]) == v.

We can put this function to good use in a benchmark:

package main

import (
    "fmt"
    "math/rand"
    "testing"
)

func CopyOdd(dest []uint, arr []uint) {
    if len(dest) < len(arr) {
        panic("dest is too small")
    }
    for i, v := range arr {
        if v&1 == 1 {
            dest[i] = v
        }
    }
}

func CopyOddBranchless(dest []uint, arr []uint) {
    if len(dest) < len(arr) {
        panic("dest is too small")
    }
    for i, v := range arr {
        dest[i] ^= uint(-(v & 1)) & (v ^ dest[i])
    }
}

var array []uint
var dest []uint

func BenchmarkCopyOdd(b *testing.B) {
    for n := 0; n < b.N; n++ {
        CopyOdd(dest, array)
    }
}

func BenchmarkCopyOddBranchless(b *testing.B) {
    for n := 0; n < b.N; n++ {
        CopyOddBranchless(dest, array)
    }
}
func main() {
    array = make([]uint, 10000)
    dest = make([]uint, len(array))
    for i := range array {
        array[i] = uint(rand.Uint32())
    }
    res1 := testing.Benchmark(BenchmarkCopyOdd)
    fmt.Println("BenchmarkCopyOdd (random)", res1)
    res2 := testing.Benchmark(BenchmarkCopyOddBranchless)
    fmt.Println("BenchmarkCopyOddBranchless (random)", res2)
}

On an Apple system, we got:

BenchmarkCopyOdd (random)    60782           19254 ns/op
BenchmarkCopyOddBranchless (random)   166863          7124 ns/op

In this test, the branchless approach is much faster. We should stress that branchless code is not always faster. In fact, we observe in our overall test results that the branchless function is significantly slower than the original when the branch is predictable (e.g., 2896 ns/op vs 7124 ns/op). In actual software, you should try to recognize where you have poorly predicted branches and, in those cases, check whether a branchless approach might be faster. Thankfully, in most projects, most branches are well predicted in practice.
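
On Linux, you can check for mispredictions directly rather than guessing: the perf tool counts branches and branch misses for a whole program run. The program name below is a placeholder, and event availability varies by CPU:

$ perf stat -e branches,branch-misses ./yourprogram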

s mazuk: Reblog to open a rail line from your blog to the person you reblogged this from

our beautiful rail line… (so far)

Planet Haskell: Oleg Grenrus: ST with an early exit

Posted on 2024-03-17 by Oleg Grenrus

Implementation

I wish there were an early exit functionality in the ST monad. This need comes up from time to time when writing imperative algorithms in Haskell.

It's very likely there is a functional version of an algorithm, but it might be that the ST version is simply faster, e.g. by avoiding allocations (as allocating even short-lived garbage is not free).

But there is no early exit in the ST monad.

Recent GHC versions added delimited continuations. The TL;DR is that delimited continuations are somewhat like goto:

  • newPromptTag# creates a label (tag)
  • prompt# brackets the computation
  • control# kind of jumps (goes to) the end of enclosing prompt bracket, and continues from there.

So let's use this functionality to implement a version of ST which has an early exit. It turns out to be quite simple.

The ST monad is defined like:

newtype ST s a = ST (State# s -> (# State# s, a #))

and we change it by adding an additional prompt tag argument:

newtype EST e s a = EST
    { unEST :: forall r. PromptTag# (Either e r)
            -> State# s -> (# State# s, a #) 
    }

(Why forall r.? We'll see soon).

It's easy to lift normal ST computations into EST ones:

liftST :: ST s a -> EST e s a
liftST (ST f) = EST (\_ -> f)

so EST is a generalisation of ST, good.

Now we need a way to run EST computations, and also a way to early exit in them.

The early exit is the simpler one. Given that the prompt with our tag brackets the whole computation, we simply jump to its end with Left e. We ignore the captured continuation, as we have no use for it.

earlyExitEST :: e -> EST e s any
earlyExitEST e = EST (\tag -> control0## tag (\_k s -> (# s, Left e #)))

Now, the job for runEST is to create the tag and prompt the computation:

runEST :: forall e a. (forall s. EST e s a) -> Either e a
runEST (EST f) = runRW#
    -- create tag
    (\s0 -> case newPromptTag# s0 of {
    -- prompt
    (# s1, tag #) -> case prompt# tag
         -- run the `f` inside prompt,
         -- and once we get to the end return `Right` value
         (\s2 -> case f tag s2 of (# s3, a #) -> (# s3, Right a #)) s1 of {
    (# _, a #) -> a }})

runRW# and forgetting the state at the end is the same as in runST, for comparison:

runST :: (forall s. ST s a) -> a
runST (ST st_rep) = case runRW# st_rep of (# _, a #) -> a
-- See Note [runRW magic] in GHC.CoreToStg.Prep

With all the pieces in place, we can run a few simple examples:

-- | >>> ex1
-- Left 'x'
ex1 :: Either Char Bool
ex1 = runEST $ earlyExitEST 'x'

-- | >>> ex2
-- Right True
ex2 :: Either Char Bool
ex2 = runEST (return True)

Comments & wrinkles

Early exit is one of the simplest "effects" you can implement with delimited continuations. This is the throwing part of exceptions, with only a top-level exception handler. It's a nice exercise (and a brain twister) to implement catch blocks.

One wrinkle in this implementation is the control0## (not control0#) function I used. The delimited continuation primops are made to work only with RealWorld, not with arbitrary State# tokens.

I think this is unnecessary specialization (GHC issue #24165). I was advised to simply use unsafeIOToST, so I did:

control0##
    :: PromptTag# a
    -> (((State# s -> (# State# s, b #)) -> State# s -> (# State# s, a #))
                                         -> State# s -> (# State# s, a #))
    -> State# s -> (# State# s, b #)
control0## = unsafeCoerce# control0#

This still feels silly, especially upon realizing that the (only) example in the delimited continuations proposal goes like this:

type role CC nominal representational
newtype CC ans a = CC (State# RealWorld -> (# State# RealWorld, a #))
  deriving (Functor, Applicative, Monad) via IO

runCC :: (forall ans. CC ans a) -> a
runCC (CC m) = case runRW# m of (# _, a #) -> a

but if you look at that, it's just an ST monad done weirdly:

newtype ST s a = ST (State# RealWorld -> (# State# RealWorld, a #))
-- not using `s` argument !?

There might be a good reason why CC should be done like that (other than that the primops are RealWorld specific), but the proposal doesn't explain the difference. To me, having a phantom ans instead of using it nominally as in ST is suspicious.

Conclusion

Delimited continuations are fun and could be very useful.

But surprisingly, at the time of writing, I cannot find any package on Hackage using them for anything! A search for newPromptTag returns only false positives (ghc-lib etc.) right now. I wonder why they are unused?

Please try them out!

Jesse Moynihan: Forming 376

Michael Geist: Better Laws, Not Bans: Why a TikTok Ban is a Bad Idea

New legislation making its way through the U.S. Congress has placed a TikTok ban back on the public agenda. The app is already prohibited on government devices in Canada, the government has quietly conducted a national security review, and there are new calls to ban it altogether from the Canadian market. While it might be tempting for some politicians to jump on the bandwagon, a ban would be a mistake. There are legitimate concerns with social media companies, but there simply hasn’t been convincing evidence that TikTok currently raises a national security threat nor that it poses a greater risk than any other social media service. The furor really seems to be a case of economic nationalism – a desire to deny a popular Chinese service access to the U.S. market – rather than a genuine case that TikTok poses a unique privacy and security threat. Taken at face value, however, the case against TikTok comes down to a simple concern: its owner, ByteDance, is a Chinese company that could theoretically be required to disclose user information to the Chinese government or compelled to act on its behalf. The proposed U.S. law therefore would require that TikTok be sold within six months or face a ban.

While the concerns associated with TikTok given its Chinese connection and popularity with younger demographics are well known, the privacy and security case against it is very weak. First, the risk of government mandated disclosures seems entirely theoretical at this stage. To date, the company says there have been no such requests and it has worked to create a firewall from its Chinese operation through Project Texas, which it says will ensure that U.S. data stays in the U.S. It is true that the Chinese government could require disclosures, but that is true of any government. Indeed, mandated governmental disclosures have been a concern for decades: think of the B.C. government outsourcing health data to the U.S. two decades ago, the Snowden revelations in 2013, or the myriad of U.S. laws that already mandate disclosures. Rather than a ban, the solutions include blocking statutes that prohibit disclosures, retention of data within jurisdictions, and transparency requirements that mandate notification of data disclosures.

Second, the privacy and disinformation concerns are by no means unique to TikTok. All social media sites are known platforms for government-backed disinformation campaigns and raise significant privacy issues. Banning a single app doesn’t solve the issue, it only means shifting those campaigns to other platforms. Disinformation is a problem whether on TikTok, Facebook, Twitter or any other social media service. If we are serious about addressing the issue, we need broadly applicable regulations and compliance measures.

Third, banning a single social media service only strengthens the competitors and consolidates their power. For example, India banned TikTok on a permanent basis in 2021. The result? Instagram, owned by Meta, became the country’s most popular app, providing a reminder of the unintended consequences of an app ban.

Fourth, a democratic government banning TikTok seems likely to create a model that will be emulated by others to restrict speech. TikTok has already been banned in Nepal for “disrupting social harmony”, in Somalia due to explicit content, in Indonesia for blasphemy, and in Afghanistan to “prevent young persons from being misled.” This is not a model that Canada or any other democratic country should be embracing.

Fifth, TikTok is an important platform for expression. While governments have occasionally pursued restrictions on foreign government-backed broadcasters (i.e. Russia Today), TikTok is a user content site hosting expression from millions worldwide. There are real harms that occur on the platform, as on any other. The answer to those problems lies in broadly applicable regulations – better privacy, competition, platform responsibility and accountability, as well as measures to address deliberate misinformation. That notably includes the platform liability portion of the Online Harms Act. But it does not – nor should it – include a ban based on a flimsy, largely evidence-free case.

The post Better Laws, Not Bans: Why a TikTok Ban is a Bad Idea appeared first on Michael Geist.

things magazine: Online horrorshow

Are We Watching The Internet Die? (via MeFi) / related, How Google is killing independent sites like ours / not related: the sounds of horror. The Mega Marvin and the Apprehension Engine (which subsequently inspired the Tension Engine), custom creations …

Tea Masters: A la gloire du printemps

I have known this BiLuoChun plantation in San Hsia for more than 15 years. It is the first stop on my trips through Taiwan's tea-producing regions. And every year there are changes! There is the impact of the winter weather, and also that of the farmer's work. Above, we can see that several tea bushes have been cut back quite short and that a few leaves are already growing on these thick branches. And all around these low-cut bushes, dead branches cover the ground. What happened?
I asked the farmer. He told me that when the tea bushes grow too tall, they become difficult to harvest. Moreover, when they are tall and therefore produce many leaves, those leaves lose concentration, because each one receives few nutrients from the roots. It is simple arithmetic: whether a bush produces 100 or 1000 leaves affects the concentration of aromas in each leaf. Cutting the bushes back therefore raises quality, but since this is a small family operation, you can see that he does not do it to all the bushes at once. Indeed, depending on their location, some bushes receive more or less sun and water and develop better than others. That explains why this farmer carries out this kind of operation selectively. And, what is harder to see, is that even the height of the cut is not the same everywhere. Sometimes he cuts lower, sometimes higher. I did not quite understand why, but for him it is not at all left to chance.

As for the dead branches all around the bushes, they serve to:
- protect the soil from erosion (it can rain very hard here),
- keep weeds from growing on the minerals present in the soil,
- slowly turn into natural fertilizer.
From these photos we can also deduce that harvesting on this plantation is necessarily done by hand. For mechanical harvesting, the tea bushes would have to be maintained so that they are all the same height.

In 15 years I have seen many changes on this plantation! The biggest came about 5 years ago, when the grandfather handed things over to his son and daughter-in-law. He then chose to farm organically, as naturally as possible. It took a few years, but the impact on quality fully justifies the extra cost. Most San Hsia farmers have followed a different strategy: large-scale production that requires a lot of fertilizer.
But this producer likes his independence. He stays small and, thanks to our long history together, I have the privilege of being the first to hear when he has made his first production. This year, the first taste of this BiLuoChun came on March 5. Only 3.6 kg! And what finesse!

This tea makes me think of the ambrosia in Homer's Iliad. Drink of the gods, it sustained the Greeks seeking glory in their expedition against Troy. The green tea bud looks like a spear or a sword! And, like so many heroes, these leaves are torn from life in their earliest youth, before even reaching maturity. That is especially true of this first harvest of the year! It is also the one whose aromas are the most divine and, like Hector or Achilles, the one whose memory is the most glorious and unforgettable!

Planet Lisp: vindarel: Oh no, I started a Magit-like plugin for the Lem editor

Lem is an awesome project. It’s an editor built in Common Lisp, ready to use out of the box for Common Lisp, that supports more languages and modes (Python, Rust, Elixir, Go, JavaScript, TypeScript, Haskell, Java, Nim, Dart, OCaml, Scala, Swift, shell, asm, but also markdown, ascii, JSON, HTML and CSS, SQL...) thanks, in part, to its built-in LSP support.

I took on the challenge of adding an interactive interface for Git, à la Magit, because you know, despite all its features (good vim mode, project-aware commands, grep, file tree view and directory mode, multiple cursors, tabs...), there’s still so much an editor must do to be useful all day long.

Now, for a Git project (and to a lesser extent, Fossil and Mercurial ones) we can see its status, stage changes, commit, push & pull, start an interactive rebase...

I like the shape it is taking, and frankly, what I have been able to assemble in one continuously successful hack session is a tribute to what @cxxxr and the early contributors built. Lem’s codebase is easily explorable (more so in Lem itself, of course; think Emacs on steroids with greater Common Lisp power), clear, and fun. Come to the Discord or watch the repository and see how easily new contributors add new features.

Fortunately, I didn’t even have to build a UI. I started with the built-in interactive grep mode, and built from there.

Enough talk, what can we do with Lem/legit as of today? After that, we’ll discuss some implementation details.

Disclaimer: there’s room for collaboration ;)

Table of Contents

Lem/legit - manual

NOTE: you'd better read the latest manual on Lem's repository: https://github.com/lem-project/lem/blob/main/extensions/legit/README.md

legit’s main focus is to support Git operations, but it also has preliminary support for other VCSs (Fossil, Mercurial).

We can currently open a status window, stage and unstage files or diff hunks, commit our changes, or start an interactive rebase.

Its main source of inspiration is, obviously, Magit.

Status

legit is in development. It is neither finished, nor complete, nor at feature parity with Magit, nor suitable for mission-critical work. Use at your own risk.

However it should run a few operations smoothly.

Load

legit is built into Lem but it isn’t loaded by default. To load it, open a Lisp REPL (M-x start-lisp-repl) or evaluate Lisp code (M-:) and type:

(ql:quickload "lem/legit")

Now you can start it with C-x g or M-x legit-status.

Help

Press ? or C-x ? to call legit-help.

M-x legit-status

The status window shows us, on the left:

  • the current branch,
  • the untracked files,
  • the unstaged and staged changes,
  • the latest commits.

It also warns us if a rebase is in progress.

The window on the right shows us the file diffs or the commits’ content.

Refresh the status content with g.

We can navigate inside legit windows with n, p, M-n and M-p (go to next/previous section).

To change windows, use the usual M-o key from Lem.

Quit with q or C-x 0 (zero).

Stage or unstage files, diff hunks (s, u)

Stage changes with “s”.

When your cursor is on a file under “Unstaged changes”, you can see the file’s changes on the right, and you can stage the whole file with s.

You can also go to the diff window on the right, navigate the diff hunks with n and p and stage a hunk with s.

Unstage a change with u.

Discard changes to a file

Use k. Be careful: you can lose your changes.

Commit

Pressing c opens a new buffer where you can write your commit message.

Validate with C-c C-c and quit with M-q (or C-c C-k).

Branches, push, pull

Checkout a branch with b b (“b” followed by another “b”).

Create a new branch with b c.

You can push to the current remote branch with P p and pull changes (fetch) with F p.

NOTE: after pressing "P" or "F", you will not see an intermediate window giving you choices. Just press "P p" one after the other.

Interactive rebase

You can start a Git interactive rebase. Place the cursor on a commit you want to rebase from, and press r i.

You will be dropped into the classic Git rebase file, which presents your commits and an action to apply to each of them: pick the commit, drop it, fixup, edit, reword, squash...

For example:

pick 26b3990f the following commit
pick 499ba39d some commit

# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup <commit> = like "squash", but discard this commit's log message
# x, exec <command> = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop <commit> = remove commit
# l, label <label> = label current HEAD with a name
# t, reset <label> = reset HEAD to a label
# m, merge [-C <commit> | -c <commit>] <label> [# <oneline>]
# .       create a merge commit using the original merge commit's
# .       message (or the oneline, if no original merge commit was
# .       specified). Use -c <commit> to reword the commit message.
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

legit binds keys to the rebase actions:

  • use p to “pick” the commit (the default)
  • f to fixup

and so on.

Validate anytime with C-c C-c and abort with C-c C-k.

NOTE: at the time of writing, "reword" and "edit" are not supported.
NOTE: the interactive rebase is currently Unix only. This is due to the short shell script we use to control the Git process. Come join us if you know how to "trap some-fn SIGTERM" for Windows platforms.

Abort, continue, skip

In any legit window, type r a to abort a rebase process (if it was started by you inside Lem or by another process), r c to call git rebase --continue and r s to call git rebase --skip.

Fossil

We have basic Fossil support: see current branch, add change, commit.

Mercurial

We have basic Mercurial support.

Customization

In the lem/porcelain package:

  • *git-base-arglist*: the base Git command, to which command-line options are appended. Defaults to (list "git").

=> you can change the default call to the git binary.

Same with *fossil-base-args* and *hg-base-arglist* (oops, a name mismatch).

  • *nb-latest-commits*: defaults to 10
  • *branch-sort-by*: when listing branches, sort by this field name. Prefix with “-” to sort in descending order. Defaults to “-creatordate”, to list the latest used branches first.
  • *file-diff-args*: defaults to (list "diff" "--no-color"). Arguments to display the file diffs. Will be surrounded by the git binary and the file path. For staged files, --cached is added by the command.

If a project is managed by more than one VCS, legit takes the first VCS defined in *vcs-existence-order*:

(defvar *vcs-existence-order*
  '(
    git-project-p
    fossil-project-p
    hg-project-p
    ))

where these symbols are functions with no arguments that return two values: a truthy value if the current project is considered a Git/Fossil/Mercurial project, and a keyword representing the VCS: :git, :fossil, :hg.

For example:

(defun hg-project-p ()
  "Return t if we find a .hg/ directory in the current directory (which should be the project root. Use `lem/legit::with-current-project`)."
  (values (uiop:directory-exists-p ".hg")
          :hg))

The following variables and parameters are in the lem/legit package; they might not be exported.

  • *legit-verbose*: if non-nil, print some logs on standard output (the terminal) and create the hunk patch file on disk at (lem-home)/lem-hunk-latest.patch.

=> to help debugging

See the sources in /extensions/legit/.

Implementation details

Calls

Repository data is retrieved with calls to the VCS binary. We have a proof of concept that reads some data directly from the Git objects using cl-git, with an eye to the best efficiency.

Basically, we get Git status data with git status --porcelain=v1. This outputs something like:

 A project/settings.lisp
 M project/api.lisp
?? project/search/datasources

We read this output into a string and parse it.
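To illustrate the parsing idea, here is a minimal sketch in Python (not legit's actual Lisp): in porcelain v1 output, the first column is the staged status, the second the unstaged status, and "??" marks untracked files.

import subprocess

def git_status():
    # Run `git status --porcelain=v1` and capture its output as a string.
    out = subprocess.run(["git", "status", "--porcelain=v1"],
                         capture_output=True, text=True, check=True).stdout
    staged, unstaged, untracked = [], [], []
    for line in out.splitlines():
        x, y, path = line[0], line[1], line[3:]
        if x == "?" and y == "?":
            untracked.append(path)
        else:
            if x != " ":
                staged.append(path)
            if y != " ":
                unstaged.append(path)
    return staged, unstaged, untracked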

Interactive rebase

The interactive rebase currently uses a Unix-only shell script.

When you run git rebase --interactive, the Git program creates a special file in .git/rebase-merge/git-rebase-todo, opens it with your $EDITOR in the terminal, lets you edit it (change a “pick” to “fixup”, “reword”, etc.), and on exit it interprets the file and runs the required Git operations. What we want is to not use Git’s default editor, but to edit the file with Lem and our special legit mode that binds keys for quick actions (press “f” for “fixup”, etc.). So we set the shell’s $EDITOR to a dummy editor, this shell script:

#!/usr/bin/env bash
# Dummy $EDITOR: do nothing but wait, and exit cleanly on SIGTERM.

function ok {
    exit 0
}

# Exit successfully when signaled, so Git accepts the edited file.
trap ok SIGTERM
echo "dumbrebaseeditor_pid:$$"

while :
do
        sleep 0.1
done

This script doesn’t simulate an editor: it just waits, so we can edit the rebase file with Lem. The script catches a SIGTERM signal and exits successfully, so git-rebase is happy, terminates the rebase, and all is well.

But that’s Unix only.

On that matter Magit seems to be doing black magic.

The basic function to write content to a buffer is

(insert-string point s :read-only t)

And this is how you make actionable links:

(put-text-property start end :visit-file-function function)

where :visit-file-function is any keyword you want, and function is any lambda you want. So, how do you make a link do something useful? Create a lambda, make it close over any variables you want, “store” it in the link, and later on read the attribute at point with

(text-property-at point :visit-file-function)

where point can be (buffer-point (window-buffer *my-window*)) for instance.

Now create a mode, add keybindings and you’re ready to go.

;; Legit diff mode: view diffs.
;; We use the existing patch-mode and supercharge it with our keys.
(define-major-mode legit-diff-mode lem-patch-mode:patch-mode
    (:name "legit-diff"
     :syntax-table lem-patch-mode::*patch-syntax-table*
     :keymap *legit-diff-mode-keymap*)
  (setf (variable-value 'enable-syntax-highlight) t))

;; git commands.
;; Some are defined on peek-legit too.
(define-key *global-keymap* "C-x g" 'legit-status)

or a minor mode:

(define-minor-mode peek-legit-mode
    (:name "Peek"
     :keymap *peek-legit-keymap*)
  (setf (not-switchable-buffer-p (current-buffer)) t))

;; Git commands
;; Some are defined on legit.lisp for this keymap too.
(define-key *peek-legit-keymap* "s" 'peek-legit-stage-file)
(define-key *peek-legit-keymap* "u" 'peek-legit-unstage-file)
(define-key *peek-legit-keymap* "k" 'peek-legit-discard-file)

TODOs

Much needs to be done, if only to make the UX more discoverable.

First:

  • interactive rebase: support reword, edit.
  • show renamed files

and then:

  • visual submenu to pick subcommands
  • view log
  • stage only selected region (more precise than hunks)
  • unstage/stage/discard multiple files
  • many, many more commands, settings and switches
  • mouse context menus

Closing words

You’ll be surprised by all of Lem’s features and by how easy it is to add new ones.

I believe it doesn’t make much sense to “port Magit to Lem”. The UIs are different, the text displaying mechanism is different, etc. It’s faster to re-implement the required functionality, without the cruft. And look, I started, it’s possible.

But, sad me, I didn’t plan to be involved in yet another side project, as cool and motivating as it might be :S


Michael Geist: Government Gaslighting Again?: Unpacking the Uncomfortable Reality of the Online Harms Act

The Online Harms Act was only introduced two weeks ago, but it already appears the government is ready to run back the same playbook of gaslighting and denials that plagued Bills C-11 and C-18. Those bills, which addressed Internet streaming and news, faced widespread criticism over potential regulation of user content and the prospect of blocked news links on major Internet platforms. Rather than engage in a policy process that took the criticism seriously, the government ignored digital creators (including disrespecting indigenous creators) and dismissed the risks of Bill C-18 as a bluff. The results of that strategy are well-known: Bill C-11 required a policy direction fix and is mired in a years-long regulatory process at the CRTC, and news links have been blocked for months on Meta as the list of Canadian media bankruptcies and closures mounts.

Bill C-63, the Online Harms Act, offered the chance for a fresh start given that the government seemed to accept the sharp criticism of its first proposal, engaging in a more open consultative process in response. As I noted when the bill was first tabled, the core of the legislation addressing the responsibility of Internet platforms was indeed much improved. Yet it was immediately obvious there were red flags, particularly with respect to the Digital Safety Commission charged with enforcing the law and with the inclusion of Criminal Code and Human Rights Act provisions with overbroad penalties and the potential to weaponize speech complaints. The hope – based on the more collaborative approach used to develop the law – was that there would be a “genuine welcoming of constructive criticism rather than the discouraging, hostile processes of recent years.” Two weeks in, that hope is rapidly disappearing.

The government’s shift in approach has come as the criticism has increased. From former Chief Justice of the Supreme Court of Canada Beverley McLachlin (“I’m virtually certain that many of these provisions will be challenged if they stay in their present form”) to Margaret Atwood (“The possibilities for revenge false accusations + thoughtcrime stuff are sooo inviting”), the government seemed caught off guard by the harsh response to its bill. After a second briefing failed to quell the concerns, the Minister and officials in the PMO have gone back to the gaslighting playbook by dismissing the criticism as clickbait and suggesting it involves a misunderstanding of the law.

There are plenty of reliable sources on Bill C-63 (my Law Bytes podcast this week features Vivek Krishnamurthy, who was on the government’s expert panel on online harms, and I participated in another podcast with Senator Pamela Wallin) and the emerging consensus is that there are legitimate, serious concerns with the bill. These include:

  • The poorly conceived Digital Safety Commission lacks even basic rules of evidence, can conduct secret hearings, and has been granted an astonishing array of powers with limited oversight. This isn’t a fabrication. For example, Section 87 of the bill literally says “the Commission is not bound by any legal or technical rules of evidence.”
  • The Criminal Code provisions are indefensible: they really do include penalties that run as high as life in prison for committing a crime if motivated by hatred (Section 320.1001 on Offence Motivated By Hatred) and feature rules that introduce peace bonds for the possibility of a future hate offence with requirements to wear a monitoring device among the available conditions (Section 810.012 on Fear of Hate Propaganda Offence or Hate Crime).
  • The Human Rights Act changes absolutely open the door to the weaponization of complaints for communication of hate speech online that “is likely to foment detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination” (Section 13.1). The penalties are indeed up to $20,000 for the complainant and up to $50,000 to the government (Section 53.1).

This is the plain text of the bill. The Spectator article that the Minister suggests is clickbait may overstate some aspects of Bill C-63, but the core elements are accurate. Those supporters of the bill who are clinging to the Internet platform regulation provisions would do well to keep scrolling through the full text. The most obvious solution is to cut out the Criminal Code and Human Rights Act provisions, which have nothing to do with establishing Internet platform liability for online harms. Instead, the government seems ready yet again to gaslight its critics and claim that they have it all wrong. But the text of the law is unmistakable, and the initial refusal to address the concerns is a mistake that, if it persists, risks sinking the entire bill.

The post Government Gaslighting Again?: Unpacking the Uncomfortable Reality of the Online Harms Act appeared first on Michael Geist.

Daniel Lemire's blog: How to read files quickly in JavaScript

Suppose you need to read several files on a server using JavaScript. There are many ways to read files in JavaScript with a runtime like Node.js. Which one is best? Let us consider the various approaches.

Using fs.promises

const fs = require('fs/promises');
const readFile = fs.readFile;
readFile("lipsum.txt", { encoding: 'utf-8' })
.then((data) => {...})
.catch((err) => {...})

Using fs.readFile and util.promisify

const fs = require('fs');
const util = require('util');
const readFile = util.promisify(fs.readFile);
readFile("lipsum.txt", { encoding: 'utf-8' })
.then((data) => {...})
.catch((err) => {...})

Using fs.readFileSync

const fs = require('fs');
const readFileSync = fs.readFileSync;
var data = readFileSync("lipsum.txt", { encoding: 'utf-8' })

Using await fs.readFileSync

const fs = require('fs');
const readFileSync = fs.readFileSync;
async function f(name, options) {
  return readFileSync(name, options);
}

Using fs.readFile

const fs = require('fs');
const readFile = fs.readFile;
fs.readFile('lipsum.txt', function read(err, data) {...});

Benchmark

I wrote a small benchmark where I repeatedly read a file from disk. It is a simple loop where the same file is accessed each time. I report the number of milliseconds needed to read the file 50,000 times. The file is relatively small (slightly over a kilobyte). I use a large server with dozens of Ice Lake Intel cores and plenty of memory. I use Node.js 20.1 and Bun 1.0.14. Bun is a competing JavaScript runtime.

I ran the benchmarks multiple times, and I report the best results in all cases. Your results will differ.

                                  time (Node.js)    time (Bun)
fs.promises                       2400 ms           110 ms
fs.readFile and util.promisify    1500 ms           180 ms
fs.readFileSync                   140 ms            140 ms
await fs.readFileSync             220 ms            180 ms
fs.readFile                       760 ms            90 ms

At least on my system, in this test, the fs.promises approach is significantly more expensive than anything else when using Node.js. The Bun runtime is much faster than Node.js in this test.

The results are worse than they appear for fs.promises in the following sense. I find that readFileSync uses 300 ms of CPU time whereas fs.promises uses 7 s of CPU time. That is because fs.promises triggers work on several cores during the benchmark.

Increasing the file size to, say, 32 kB does not change the conclusion. If you go to significantly larger files, many of the Node.js cases fail with “heap limit Allocation failed”. Bun keeps going even with large files. Larger files do not change the conclusion with Bun either: fs.readFile is consistently faster in my tests.

Credit. My benchmark is inspired by a test case provided by Evgenii Stulnikov.

CreativeApplications.Net: Hello from the Global Creative Laboratories! Vol. 2: Cultural Facilities Responding to the Times

On December 23, 2023, “Hello from the Global Creative Laboratories! Vol. 2: Cultural Facilities Responding to the Times” was held at Civic Creative Base Tokyo (CCBT), a hub for exploring creativity through art, technology, and design.

Submitted by: CCBTCivicCreativeBaseTokyo
Category: Member Submissions

CreativeApplications.Net (CAN) is a community supported website. If you enjoy content on CAN, please support us by Becoming a Member. Thank you!

OCaml Weekly News: OCaml Weekly News, 12 Mar 2024

  1. Js_of_ocaml 5.7
  2. Bindings to QuickJS
  3. Ocaml-windows 5.1.1
  4. First release candidate for OCaml 4.14.2
  5. OCaml.org Newsletter: February 2024
  6. Announcing the New Dark Mode on OCaml.org
  7. Call for presentations – ML 2024: ACM SIGPLAN ML Family Workshop
  8. dream-html 3.0.0
  9. ppx_minidebug 1.3.0: toward a logging framework
  10. Other OCaml News

The Shape of Code: What is known about software effort estimation in 2024

It’s three years since my 2021 post summarizing what I knew about estimating software tasks. While no major new public datasets have appeared (there have been smaller finds), I have talked to lots of developers/managers about the findings from the 2019/2021 data avalanche, and some data dots have been connected.

A common response from managers, when I outline the patterns found, is some variation of: “That sounds about right.” While it’s great to have this confirmation, it’s disappointing to be telling people what they already know, even if I can put numbers to the patterns.

Some of the developer behavior patterns look, to me, to be actionable, e.g., send developers on a course to unbias their estimates. In practice, managers are worried about upsetting developers or destabilising teams. It’s easy for an unhappy developer to find another job (the speakers at the meetups I attend often end by saying: “and we’re hiring.”)

This post summarizes a talk I gave recently on what is known about software estimating; a video will eventually appear on the British Computer Society’s Software Practice Advancement group’s YouTube channel, and the slides are on Github.

What I call the historical estimation models contain source code, measured in lines, as a substantial component, e.g., COCOMO, which overfits a minuscule dataset. The problem with this approach is that estimates of the LOC needed to implement some functionality are very inaccurate, and different developers use different amounts of LOC to implement the same functionality.
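For reference, basic COCOMO reduces to a two-constant power law. Here is a minimal sketch using Boehm's published "organic mode" constants (the constants vary by mode and calibration, and the KLOC input is itself an estimate, which is the point above):

# Basic COCOMO: effort in person-months as a power law of size in KLOC.
# a=2.4, b=1.05 are the published "organic mode" constants; other modes differ.
def cocomo_effort(kloc, a=2.4, b=1.05):
    return a * kloc ** b

print(cocomo_effort(10))  # ~26.9 person-months for a 10 KLOC project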

Most academic research in software effort estimation continues to be based on minuscule datasets; it’s essentially fake research. Who is doing good research in software estimating? One person: Magne Jørgensen.

Almost all the short internal task estimate/actual datasets contain all the following patterns:

I have a new ChatGPT generated image for my slide covering the #Noestimates movement:

ChatGPT image promoting fun at work.

Trivium: 10mar2024

Jesse Moynihan: Forming 375

MattCha's Blog: 2011 Jin Dayi: Top Modern Dayi!


This is the second part (the first part was the 2017) of the 2017/2011 Jin Dayi comparison sample set from Liquid Proust that goes for approx $57.00… This 2011 Jin (Gold) Dayi is quite famous and expensive but it really blew my mind.  They still have a few of these sets left and I recommend trying this one…

Dry leaves have a faint plum, smoke, and hay odour.

Rinsed leaf has a subtle smoke and tobacco and faint banana odour.

First infusion has a smooth oily lubricating mouthwatering feeling with a subtle smoke and creamy custard banana nuance.  Creamy oily mouthfeel.  Taste stays in the saliva. Spacy relaxing Qi; can feel it low in the abdomen.

Second has a sweet creamy oily caramel toffee taste with a very nice oily creamy texture.  Some woody and smoke in the background and a long sweet return; minutes later there are candy pops of taste.  Long sweetness that evolves from caramel to icing sugar to plum to banana to candy, long embedded in the saliva with mouthwatering deliciousness.  Nice lower abdomen and mid abdominal mild warming with spacy qi feeling.

Third infusion has a very sweet caramel oily full sweetness with resin woody slight background smoke and long evolving sweet taste with lots of oily salivating and nice mouthwatering.  Peaceful calm Qi distills the mind.  Low abdominal feeling with face and jaw light and some chest beats and spacy feeling.  Candy finish minutes later trapped within the saliva. It's got the whole package here!  Cooled down it has a resin woody creamy fresh pungent oily sweet taste. Long evolving sweet taste and long candy finish in chalky full mouthfeel.



Fourth has a creamy woody oily onset with resin woody taste and some faint pungent faint almost camphor taste.  Oily texture and mouthfeel is a bit less here but still full faint candy on the breath minutes later.  Chest abdominal feeling with light face and jaw and calm peace still feeling but also energetic with heart beats.

Fifth infusion is left to cool; it is quite resinous and sweet woody incense.  Long sweet taste; the mouthfeel and texture are fading infusion-to-infusion now. Still a nice sweet taste and faint candy minutes later. Abdominal Qi, face and jaw light. Peaceful pause.

Sixth infusion is left to cool and gives off a caramel resin sweet woody incense onset some fresher fruits taste with nice peaceful qi.  Less long sweetness now.  Abdominal feelings with light face and jaw.

Seventh infusion has a resin woody onset with some more distant sweetness, a chalky silty mouthfeeling with some stimulating feeling at the back of the tongue and throat.  Nice peaceful Qi with some chest beats, both energizing and peaceful.  Face and abdomen sensations.  Not much returning sweetness but some creamy silty oily sweetness with resin taste in the finish.

Eighth is woody resin with a sandy mouthfeeling back of tongue and throat gentle stimulation.  Not much sweet taste left but more resin woody.  Nice peaceful Qi feeling. Watery sensation smokey background.

9th is a long mug infusion and is resinous woody with a creamy sweet and silty oily texture. Strong heart beats. Slight bitter with full gripping mouth but still oily with salivating.

The overnight infusion of spent leaves gives off a creamy oily candy sweet taste with lots of salivating and mouthwatering.  



It’s got delicious aroma and long complex evolving layered taste complexity in the mouthfeel with some gripping that causes salivating and a thick oily texture.  The Qi imparts a strong effect on the mind, in some steeps both energizing and with a distilling peaceful feeling.  It also has great bodyfeeling: strong heart beats but also comforting face and abdominal sensations…

Top 5 of the modern Dayi that I have tried- excellent!!!!

Comparison: The 2017 resembles the 2011 enough to know that they are similar blends, especially in their abdominal sensations, bodyfeeling, lingering sweetness, and oily texture, but it will never have the greatness of the 2011: it doesn’t have the long evolving sweet layered taste, its mouthfeel is narrow with a tendency to dryness, and its qi is not as complex.  Still pretty enjoyable though.

Top is the 2017 Jin Dayi bottom 2011 Jin Dayi.

Paul’s (Two Dog Tea Blog) Tasting Notes

Hobbes (The Half-Dipper) Tasting Notes

Peace

MattCha's Blog: 2017 “Jin Dayi”

This sample comes from a comparison sample set from Liquid Proust which goes for approx $57.00 for the 2017/2011 Jin Dayi set.  I sampled the quite amazing 2011 the following day, in the next post.

Dry leaves have a strong wet smoke and tobacco odour.

Rinsed leaf is surprisingly sweet amongst strong wet dark cherry tobacco notes.

First infusion has a watery Smokey metallic mild chalky sweetness.  Nice sweet returning taste almost melon that builds in the aftertaste with some salivating.  Feel it in the chest. A bit of acidity left behind in the mouth.

Second infusion has smoke and tobacco with some bitterness over a dry roof-of-mouth feel and upper throat opening.  A sweet pop that has a nice return. You can feel the bitterness build up in the stomach – I haven’t tried Dayi this fresh in a long, long while. Strong chest feel with beats and energy rising.

Third infusion has a woody smoke oily texture with a creamy sweet oily creamy finish.  Smooth mouthfeel with lots of oily feeling and slight dry roof of mouth.  Nice sweet finish fades to slight dry smoke wood.



Fourth infusion is left to cool and gives off sweet oily taste with woody smoke finish.  The sweetness sort of fades into the smoke.  There is a mouth salivating with return sweetness which is almost fruity.  Strong deep uplift with chesty feeling not much bitter.  Fruity taste faintly lingering in saliva.  Spacey feeling.



Fifth infusion has a fruity faint smoke oily woody taste. Tobacco leaf base.  Mainly this sweet oily almost fruity that fades into a dryer tongue and roof stimulating effect.  Lingering fruity in mouth.  Spacy feeling

6th has some faint caramel, metallic and floral edges to it and has that oily smoke onset and fading sweetness.  There is a bit of bitterness that you can feel in the stomach.  Spacy qi feeling with feeling in the lower abdomen.

7th is left to cool; it has a smokey sweet hay almost fruity taste with a tobacco leaf base.

8th has a sweet oily hay tobacco leaf with a returning almost floral, almost fruity sweet taste.  There is a chalky faint fruit returning with some mild mouthwatering.  Spacy Qi.  Fruity taste lingers in the saliva minutes later.

9th has a chalky talc woody slight bitter initial taste that has a cereal hay sweet taste with some mild mouth drying on roof and gums and tongue.

10th has a woody smoke ashy taste not that sweet anymore.  Spacy Qi.

11th is left in the cup overnight and gives off a sweet oily slight smokey wood taste with a sweet vegetal woody finish.  Lingering sweetness left in the saliva.

Long mug steeping gives a dry gripping mouthfeel with bitter wood and smoke taste.



The overnight mug steeping of spent leaves is a sweet woody almost fruit with a bitter almost but not really milk chocolate taste.  Silty mouthfeel with some chesty Qi and moderate energetic feel.

Peace

 

Daniel Lemire's blog: How many political parties rule Canada? Fun with statistics

Canada has several political parties with elected members of parliament: the Liberals, the Conservatives, the Bloc Québécois, the NDP, and the Greens. But do they behave as distinct political parties when voting, or are they somehow aligned?

Voting data for members of parliament in Canada is easily accessible as JSON or XML. Thus I wrote a little Python script to compute, for each vote, what percentage of each party voted yea. I use the latest 394 votes. It turns out that, overwhelmingly, the percentage is either 0% or 100%. So individual members of parliament are not relevant; only caucuses matter.
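As a sketch of that computation (not the actual script; the JSON field names below are assumptions about the House of Commons ballot export and may need adjusting):

import json
from collections import defaultdict

def yea_percentages(ballots):
    # Given one vote's ballots, return {party: percentage voting yea}.
    yeas = defaultdict(int)
    totals = defaultdict(int)
    for b in ballots:
        party = b["CaucusShortName"]        # assumed field name
        totals[party] += 1
        if b["VoteValueName"] == "Yea":     # assumed field name
            yeas[party] += 1
    return {p: 100.0 * yeas[p] / totals[p] for p in totals}

with open("vote.json") as f:                # hypothetical downloaded file
    print(yea_percentages(json.load(f)))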

We can first examine Pearson’s correlation between how the different parties vote:

                 Conserv.  Liberal  NDP   Bloc Québécois  Green Party
Conserv.            1       -0.5    -0.5      -0.1           -0.2
Liberal                      1       0.8       0.4            0.5
NDP                                  1         0.4            0.6
Bloc Québécois                                 1              0.5
Green Party                                                   1

We observe that there is excellent correlation between the ruling party (the Liberals) and the NDP, and to a lesser extent with the Bloc Québécois (0.4) and the Greens (0.5). The Conservatives are anti-correlated with everyone else, although they are less anti-correlated with the Bloc Québécois and the Greens than with the other parties (the Liberals and the NDP).

Though there are hundreds of votes, you can capture 85% of the variance using only two dimensions with a principal component analysis. In effect, you create two fictional voting events (weighted combinations of the actual votes) that most accurately represent the stances of the various parties.
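A sketch of both computations, assuming a 394 × 5 matrix of per-party yea percentages built as above (the CSV file name is hypothetical; requires NumPy and scikit-learn):

import numpy as np
from sklearn.decomposition import PCA

parties = ["Conservative", "Liberal", "NDP", "Bloc", "Green"]
votes = np.loadtxt("percentages.csv", delimiter=",")  # shape (394, 5)

# Pearson correlation between parties (columns as variables).
print(np.round(np.corrcoef(votes, rowvar=False), 1))

# Project the five parties onto two principal components of the votes.
pca = PCA(n_components=2)
coords = pca.fit_transform(votes.T)           # one 2-D point per party
print(pca.explained_variance_ratio_.sum())    # ~0.85 per the post
for party, (x, y) in zip(parties, coords):
    print(f"{party}: ({x:.1f}, {y:.1f})")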

The result demonstrates that four of the Canadian political parties are clustered, meaning that they vote similarly, while one party (the Conservatives) is clearly distinct in its voting patterns.

My source code is available. It is made of two simple Python files that you can run yourself. I encourage you to run your own analysis. My work can be extended to include more data.

CreativeApplications.Net: Livegrid: LED Matrices Made Easy

Livegrid is a harmonious blend of technology and art that brings environmental awareness right into your living space – now looking for support on Kickstarter.

Submitted by: drvkmr
Category: Member Submissions / Objects

CreativeApplications.Net (CAN) is a community supported website. If you enjoy content on CAN, please support us by Becoming a Member. Thank you!

The Universe of Discourse: Werewolf ammunition

This week I read on Tumblr somewhere this intriguing observation:

how come whenever someone gets a silver bullet to kill a werewolf or whatever the shell is silver too. Do they know that part gets ejected or is it some kind of scam

Quite so! Unless you're hunting werewolves with a muzzle-loaded rifle or a blunderbuss or something like that. Which sounds like a very bad idea.

Once you have the silver bullets, presumably you would then make them into cartridge ammunition using a standard ammunition press. And I'd think you would use standard brass casings. Silver casings would be expensive and pointless, and where would you get them? The silver bullets themselves are much easier. You can make them with an ordinary bullet mold, also available at Wal-Mart.

Anyway it seems to me that a much better approach, if you had enough silver, would be to use a shotgun and manufacture your own shotgun shells with silver shot. When you're attacked by a werewolf you don't want to be fussing around trying to aim for the head. You'd need more silver, but not too much more.

I think people who make their own shotgun shells usually buy their shot in bags instead of making it themselves. A while back I mentioned a low-tech way of making shot:

But why build a tower? … You melt up a cauldron of lead at the top, then dump it through a copper sieve and let it fall into a tub of water at the bottom. On the way down, the molten lead turns into round shot.

That's for 18th-century round bullets or maybe small cannonballs. For shotgun shot it seems very feasible. You wouldn't need a tower, you could do it in your garage. (Pause while I do some Internet research…) It seems the current technique is a little different: you let the molten lead drip through a die with a small hole.

Wikipedia has an article on silver bullets but no mention of silver shotgun pellets.

Addendum

I googled the original Tumblr post and found that it goes on very amusingly:

catch me in the woods the next morning with a metal detector gathering up casings to melt down and sell to more dumb fuck city shits next month

Tea Masters: Interview with Stéphane Erler of Tea Masters

The Universe of Discourse: Optimal boxes with and without lids

Sometime around 1986 or so I considered the question of the dimensions that a closed cuboidal box must have to enclose a given volume but use as little material as possible. (That is, its surface area should be minimized.) It is an elementary calculus exercise and it is unsurprising that the optimal shape is a cube.

Then I wondered: what if the box is open at the top, so that it has only five faces instead of six? What are the optimal dimensions then?

I did the calculus, and it turned out that the optimal lidless box has a square base like the cube, but it should be exactly half as tall.
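Here is that calculus for the lidless case, writing $x$ for the side of the square base and $h$ for the height:

$$A = x^2 + 4xh, \qquad V = x^2 h \text{ fixed} \implies A(x) = x^2 + \frac{4V}{x},$$

$$A'(x) = 2x - \frac{4V}{x^2} = 0 \implies x^3 = 2V \implies h = \frac{V}{x^2} = \frac{x}{2}.$$

The same computation for the closed box, with $A = 2x^2 + 4xh$, gives $x^3 = V$ and $h = x$: the cube.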

For example the optimal box-with-lid enclosing a cubic meter is a 1×1×1 cube with a surface area of $6$.

Obviously if you just cut off the lid of the cubical box and throw it away you have a one-cubic-meter lidless box with a surface area of $5$. But the optimal box-without-lid enclosing a cubic meter is shorter, with a larger base. It has dimensions $$2^{1/3} \cdot 2^{1/3} \cdot \frac{2^{1/3}}2$$

and a total surface area of only $3\cdot 2^{2/3} \approx 4.76$. It is what you would get if you took an optimal complete box, a cube, that enclosed two cubic meters, cut it in half, and threw the top half away.

I found it striking that the optimal lidless box was the same proportions as the optimal complete box, except half as tall. I asked Joe Keane if he could think of any reason why that should be obviously true, without requiring any calculus or computation. “Yes,” he said. I left it at that, imagining that at some point I would consider it at greater length and find the quick argument myself.

Then I forgot about it for a while.

Last week I remembered again and decided it was time to consider it at greater length and find the quick argument myself. Here's the explanation.

Take the cube and saw it into two equal halves. Each of these is a lidless five-sided box like the one we are trying to construct. The original cube enclosed a certain volume with the minimum possible material. The two half-cubes each enclose half the volume with half the material.

If there were a way to do better than that, you would be able to make a lidless box enclose half the volume with less than half the material. Then you could take two of those and glue them back together to get a complete box that enclosed the original volume with less than the original amount of material. But we already knew that the cube was optimal, so that is impossible.


churchturing.org / 2024-03-28T12:52:27