Read more of this story at Slashdot.
Imagine that you’re coordinating a large-scale search-and-rescue mission in a cave. You need to know where all your groups are, and whether or not they’ve found anything. But how do they all communicate with the command center?
You’d guess radio, but you’d guess wrong. Radio doesn’t propagate well at all in a maze of twisty passages, all alike; rocks absorb radio waves, especially in the VHF/UHF range that’s best suited for most small radios. In the past, you’d run wire and transmit along it. This article runs through the options in detail. But adding miles of wire to your already heavy caving and climbing gear is a nuisance or worse.
Some experiments by groups of amateur radio operators, and cavers, with APRS repeaters aim to change that. Digipeaters, as they’re known in the APRS world, take an incoming message and forward it on again. On each successive hop, the station that received the signal appends its name to a list of paths that are sent along with the message, which assures that the message propagates but doesn’t get repeated around forever in a loop.
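The forwarding rule just described can be sketched in a few lines. This is an illustrative toy, not real APRS packet handling: the station names and the hop limit are made up (real APRS constrains paths with mechanisms like WIDEn-N), but the core idea is the same, a station drops any message whose path already contains its own name.

```python
# Toy digipeater: append our name to the path, refuse to repeat
# a message we've already handled or one that's out of hops.
MAX_HOPS = 3  # hypothetical hop budget

def digipeat(station, message):
    """Return the forwarded copy of a message, or None if it should be dropped."""
    if station in message["path"]:
        return None  # we already repeated this one: dropping it breaks the loop
    if len(message["path"]) >= MAX_HOPS:
        return None  # hop budget exhausted
    # append our name and pass the message along
    return {"payload": message["payload"],
            "path": message["path"] + [station]}

msg = {"payload": "we found him!", "path": []}
msg = digipeat("CAVE-1", msg)   # first hop
msg = digipeat("CAVE-2", msg)   # second hop
```

Because each station's name rides along in the path, a copy of the message that wanders back to an earlier repeater is simply discarded rather than echoed forever.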
Digipeaters and battery packs are dropped, in Hänsel and Gretel fashion, as the cavers work their way through the cave. The trick, of course, is to place each new repeater before you’ve entirely lost the radio signal from the previous one. But the APRS Cave-Link project got one mile’s worth of transmission in Mammoth Cave without using wires at all. That’s not bad!
Now, GPS still doesn’t work underground, so the cavers need to bring an accurate map along with them and keep track of their own location. But even getting important messages (“we found him!”) passed around inside a cave environment is enough of a challenge.
We’ve seen APRS used for tracking high-altitude balloon payloads, and we can’t help but wonder if the same attention to weight-saving that’s demonstrated in these DIY versions wouldn’t also be useful in a caving context.
Have you made any cool networks of APRS links? Under adverse conditions? Let us know in the comments.
And thanks [Travis Goodspeed] for the unintentional tip.
Editor's note: To give you a chance to get to know our writers better, we've asked them to respond to some questions. In coming weeks, we'll be posting their responses, which will always be available as a link from their contributor biography page. Here's Nick Allen.
Where did you grow up, and what was it like?
I grew up in North Reading, a small town in Massachusetts with a stern historical society, four Dunkin' Donuts stores in a single mile, and (at the time) a Red Sox-losing complex. The number of Dunkin' Donuts in that strip has since been reduced to two.
Was anyone else in your family into movies? If so, what effect did they have on your moviegoing tastes?
My family is a movie family, but I'd say that going to the movies with my parents was especially influential in teaching me to see movies as an event to be discussed, even across different tastes. I liked talking about what we liked and didn't like about the movie whenever we got dinner afterward. And they got me into James Bond movies, too.
What's the first movie you remember seeing, and what impression did it make on you?
That might have been "Aladdin," but I don't remember any lasting impression.
What's the first movie that made you think, "Hey, some people made this. It didn't just exist. There's a human personality behind it."
That would probably have to be "Pulp Fiction," recommended to me by a friend who was writing scripts. As I'm sure is the case for many people, this movie's huge style and sense of cool made it a gripping, electric experience.
What's the first movie you ever walked out of?
"Without a Paddle," and only because of a bad friend hang-out/even worse date(?). I have never gone back to it since, but I have also never walked out of a movie for fun or at any festivals, etc.
What's the funniest film you've ever seen?
What's the saddest film you've ever seen?
What's the scariest film you've ever seen?
What's the most romantic film you've ever seen?
"Manhattan," if only for the intoxicating presentation of a city that made me want so badly to get out of the suburbs and under elevated subways and between skyscrapers after high school. The romance of living in a city can be as potent as any intimate relationship.
What's the first television show you ever saw that made you think television could be more than entertainment?
"The Charlie Brown Christmas Special."
What book do you think about or revisit the most?
"Sabbath's Theater," by Philip Roth.
What album or recording artist have you listened to the most, and why?
Beach Boys' "Pet Sounds," which happens to be the best album ever made.
Is there a movie that you think is great, or powerful, or perfect, but that you never especially want to see again, and why?
"Henry: Portrait of a Serial Killer." Admirably made by a madman (John McNaughton), but clear enough to be understood in one viewing, and ruthless enough with its content that it doesn't need to be seen again.
What movie have you seen more times than any other?
"National Lampoon's Christmas Vacation," of all things. It has been a holiday tradition in my house for eons, and I don't think I've been able to watch my favorite movies as much as I've seen this one.
What was your first R-rated movie, and did you like it?
This was very likely "Die Hard," and, of course, if this wasn't my first, it should have been.
What's the most visually beautiful film you've ever seen?
I have been very lucky to see many that would fit into this category, but I'll go with "The Tree of Life." It's not just the images that are so everlasting and gorgeous, but the editing has a visual beauty of its own, which I have yet to see duplicated. But, I'll keep looking.
Who are your favorite leading men, past and present?
Who are your favorite leading ladies, past and present?
Who's your favorite modern filmmaker?
There are many, but I think I'll say Rian Johnson. He is a symbol of homegrown, creative purity, and always strives for more. Plus, he made "The Brothers Bloom," a film I love to death. The fact that he'll direct a "Star Wars" movie is enough for me to be an optimist at the end of the day about this multi-billion dollar business.
Who's your least favorite modern filmmaker?
No one that I outwardly detest, but I am pretty tired of Quentin Tarantino. We get it, dude.
What film do you love that most people seem to hate?
Jared Hess' "Gentlemen Broncos."
What film do you hate that most people love?
"Birdman." "You don't even have a Facebook!" Give me a break.
Tell me about a moviegoing experience you will never forget—not just because of the movie, but because of the circumstances in which you saw it.
A public pre-release screening of "Fast & Furious 6," for the way that an audience reacted when the sound didn't work for the first 10 minutes. Everyone started making cartoonish car sounds during an opening chase scene for a bizarre yet touching communal experience. The movie itself was a particularly good time for me as well.
What aspect of modern theatrical moviegoing do you like least?
Hands down, cell phones. The arrogance that a rogue light displays, as it distracts every viewer in the general vicinity, is enough to put someone into a blind rage.
What aspect of moviegoing during your childhood do you miss the most?
An easy answer, but the experience of seeing a film projected on the screen. The cigarette burns let you know it was only a movie, a security that was a great comfort; the potential of what those images would be, endless.
Have you ever damaged a friendship, or thought twice about a relationship, because you disagreed about whether a movie was good or bad?
I don't think so, and I hope this never happens. People see what they want to see, and people like what they like.
What movies have you dreamed about?
Just last week, I had my first "Star Wars: The Force Awakens" dream, but the best part is that since I've barely seen any footage, my dream looked more like a Sidney Lumet movie with X-wing fighter costume design than whatever J.J. Abrams' film actually is. I remember even in my dream, before realizing I wasn't actually watching it, that this version of "Star Wars" was lacking something.
What concession stand item can you not live without?
Buncha Crunch, maybe once every five times I get to the movies. Terrible habit, but at least I am cheap enough to not mix it with popcorn.
Over the course of its first fifteen features, Pixar has made some great films (such as the "Toy Story" series, "Ratatouille" and "Inside Out") and some not-so-great ones (such as anything with the word "Cars" in the title). However, the best of them are the ones that take an intriguing initial idea and elaborate upon it with the kind of well-developed characters, ingenious plots and emotional resonance that is rarely seen in films aimed at family audiences. The problem with its latest effort, "The Good Dinosaur," is that it has the intriguing initial idea but then seems curiously unsure of how to pursue it. The end result is a film that has some promising elements and which often seems as if it is on the verge of evolving into something wonderful but never quite manages to turn that particular corner.
The basic conceit of the film is undeniably promising—what might have happened if the asteroid that hit Earth 65 million years ago actually missed its target, and the dinosaurs that were rendered extinct by its impact were able to continue to thrive and evolve as a species? After a brief prologue showing that near-miss, the film jumps ahead a few million years to focus on a family of apatosaurus tending to their farm. Alas, the youngest of the bunch, the runty Arlo (voiced by Raymond Ochoa), is unable to do much and is the butt of teasing from older siblings Buck (Marcus Scribner) and Libby (Maleah Padilla), while his father (Jeffrey Wright) and mother (Frances McDormand) try to assure him that he is destined for greatness. One day, while chasing a feral child (Jack Bright) who has been stealing their crops, the fearful Arlo and his father are caught in a raging rainstorm, and parents of more sensitive children had better have the Kleenex ready.
While struggling to help his mother bring their crops in before winter arrives, Arlo runs across that same child, whom he blames for the death of his father, and while pursuing him, the two fall into the river and are swept many miles downstream before washing ashore. At first, Arlo hates the kid, but the boy, who not only acts like a dog but soon responds to the name Spot, eventually grows on him, and the two become friends as they discover they have more in common than one might think. As Arlo and Spot begin the long and perilous journey upstream to Arlo's home, they encounter such dangers as a giant cobra and a trio of pterodactyls (whose leader is voiced by Steve Zahn) whose seemingly laid-back attitude stands in marked contrast to their desire to savage anything they can get their talons on. Somewhat friendlier are a trio of T-Rexes (with the voices of Sam Elliott, Anna Paquin and A.J. Buckley) who are, oddly enough, buffalo ranchers trying to rescue their herd from some rustling raptors.
There are some good ideas in Meg LeFauve's screenplay, such as the idea of inverting the classic boy-and-his-pet narrative so that the boy is the pet, and the way that it threatens to become a full-blown Western with the introduction of the T-Rexes (including a campfire scene complete with someone playing a mournful tune on a "harmonica"). But once it introduces them, the film tends to abandon them in order to tell yet another variation of the tale of a seeming misfit who learns to pull himself together and use his gifts to save the day and make his mark on the world. Much of it feels cobbled together from elements that will seem very familiar to anyone who saw the likes of "The Jungle Book," "The Lion King" and "How to Train Your Dragon." The lack of a unique story might have been overcome if the characters had been compelling, but alas, neither Arlo nor Spot is especially interesting.
Visually, "The Good Dinosaur" is a stunner throughout, with one breathtaking composition after another that combines gorgeously rendered photorealistic backgrounds with the more overtly cartoony characters in an unexpectedly lovely manner. There are also a number of inspired moments where the film threatens to break its shackles and go off into strange areas, like an encounter with a styracosaurus (whose deadpan voice is supplied by the film's director, Peter Sohn) who is festooned with a number of comfort animals. In another scene, Arlo and Spot eat some fruit with hallucinogenic properties that are depicted in amusing visual detail. The aforementioned campfire scene gets especially weird when it turns into, of all things, one of the most famous scenes from "Jaws." There is even one beautifully low-key moment in which Arlo and Spot, despite the lack of a shared language, manage to communicate and commiserate with each other over the loss of their respective families in a genuinely heart-tugging manner. (This moment is so strong that I wouldn't be surprised to learn that it was the initial inspiration for the entire project.)
As those who pay attention to such things already know, "The Good Dinosaur" had a famously troubled production that saw its original director and most of the original voice cast replaced, and a number of major script rewrites added in an effort to save it. With that much behind-the-scenes chaos, it is probably not a surprise that the end result is as uneven as it turns out to be. The film will satisfy younger viewers, I suppose, but unless your kids are especially gaga over dinosaurs, my guess is that even they will recognize that it is lacking a certain something that separates the great films from the ordinary ones.
Reflecting on Nick Berardini's “Killing Them Safely” on the night of Chicago protests over another civilian death caused by excessive police force, the tension between civilians and law enforcement remains thoroughly despairing. Both sides, regardless of power, have a very human but potentially destructive fear. Even in scenarios where a police officer initially has humane intentions, there have to be better methods for cops to properly engage civilians. Berardini’s doc vigorously proves that despite their marketing, Tasers are not the answer. They are only a means to a bigger, deadlier problem.
In his directorial debut, Berardini sets his sights on the billion-dollar Taser company and the men behind it (brothers Rick and Tom Smith), who repeatedly claim in various footage that they have changed the world for the better. Taser is the only company in the world that makes these conducted electrical weapons (or CEWs), currently used by 17,000 police forces throughout the world. For their police pièce de résistance, Tom and Rick Smith upgraded Jack Cover’s original 1969 Taser into something that doesn’t just shock the human body, it practically stops it; it’s a type of cheat code to freeze another human being, paralyzed with 50,000 volts running through their body. When Taser’s M26 took off in the early 2000s, it became a phenomenon for the company.
But then, very human elements began to unravel the Taser's perfect facade. As detailed in Berardini's collected news reports, the Taser is revealed to be a type of police crutch, an easier option than physically detaining someone (like a six-year-old at school, or, in 2006, a 56-year-old in a wheelchair). Despite its self-proclaimed intent as a "non-lethal" instrument meant to snuff out a disturbance, the Taser figured in enough excessive police force to account for 300 Taser-related deaths by 2008. The power of “Killing Them Safely” is that it brings to light some of these cases that might otherwise be swept under the rug. It pointedly asks: even if the Taser doesn't always kill, why should even one unfortunate loss be acceptable?
Throwing Taser’s poster catchphrase back in their face, Berardini has essentially made a film to “protect the truth and convict the guilty.” Though lacking personality in its indictment, “Killing Them Safely” is armed with vigorous detail and horrific footage of Tasers taking lives on camera, played out in full. A particularly disturbing episode involves the famous case of Robert Dziekański, a frustrated Polish immigrant who was tasered to death in a Canadian airport by Mounties untrained in proper Taser use. Another involves the death of 23-year-old Stanley Bachtel, who was tasered to death after being pulled over for speeding and freaking out about how the cops were treating him. Like many others, he was shocked into a state of cardiac arrest.
Though focused on an emotionally gripping issue with almost too many examples to use, “Killing Them Safely” can be at odds with its own dryness, especially when its second half essentially documents the medical investigations against Taser and the Smith brothers by prosecuting attorneys Peter Williamson and John Burton. The often jaw-dropping, scandalous nature of Taser's defenses is downplayed, despite the can of worms it opens about the current state of police business and even gun control ("Tasers don't kill people, people kill people"). By no coincidence, “Killing Them Safely” doesn’t create compelling characters out of any of its recurring talking heads, despite their interactions with the Taser enterprise. Even Berardini’s fervor to burn Taser to the ground with a documentarian’s vengeance comes from an unknown place, an incomplete big picture created in the process.
“Killing Them Safely” may be far too sober to footnote anything like the Taser meme “Don’t tase me, bro!”, but it is valiant at damaging a Goliath business where it counts, decimating the purpose and potential of a golden product. Berardini has plenty of material here to break the delusions of those who want to believe in the peace behind Tasers, or who won't question the motives of those who hold authority. Especially for those in law enforcement, “Killing Them Safely” should be required viewing before taking taxpayer money to invest in their next attempt at serving and protecting their fellow man.
This is for all you aspiring photographers, illustrators, and type designers out there! We’ve teamed up with Skillshare to offer you 1 month of unlimited access to all their creative classes, for $0. Free. You can sign up here for a premium account: Booooooom + Skillshare 30-day extended trial.
I’ve been personally trying out some of their classes and I just blazed through this great class on designing a single letter: “One Drop Cap Letterform at a Time” by the amazing Jessica Hische (I’ll always refer to her as the one who did that fantastic Moonrise Kingdom type). She has so many useful tips in this, especially when she talks about properly using handlebars and plotting points in Illustrator.
From what I’ve seen so far these classes are a great way to brush up on something you may have learned a long time ago or a chance to learn something completely new. Full disclosure: Skillshare did not pay us to make this post, however if you sign up through this link it will support our site! Absolutely no pressure to do so – I still recommend their site as a worthwhile resource even if you don’t use our link!
Simplified interface to the GHC API.
“I started playing the guitar about 6 or 7, maybe 7 or 8 years ago. I was influenced by everything at the same time, that’s why I can’t get it together now.”
When you listen to Jimi Hendrix, one of the last things you’re ever likely to think is that he couldn’t “get it together” as a guitarist. Hendrix made the characteristically modest statement in 1968, in a free-form discussion about his influences with Rolling Stone’s Jann Wenner and Baron Wolman. “I used to like Buddy Holly,” he said, “and Eddie Cochran and Muddy Waters and Elvin James… B.B. King and so forth.” But his great love was Albert King, who “plays completely and strictly in one way, just straight funk blues.”
Since Hendrix’s death and subsequent enshrinement in pop culture as the undisputed master of psychedelic rock guitar, a number of posthumous releases have performed a kind of revisionism that situates him not strictly in the context of the hippie scene but rather in the blues tradition he so admired and that, in a sense, he came of age within as a session and backing guitarist for dozens of blues and R&B artists in the early 60s. In 1994 came the straightforwardly-titled compilation album Blues, which celebrated the fact that “more than a third of [Hendrix’s] recordings were blues-oriented,” writes Allmusic’s Richie Unterberger, whether originals like “Red House” and “Hear My Train a Comin’” or covers of his heroes Muddy Waters and Albert King. Martin Scorsese devoted a segment of his documentary series The Blues to Hendrix, and an ensuing 2003 album release featured even more Hendrix blues originals (with “pretty cool” liner notes about his blues record collecting habits). Prolific director Alex Gibney has a forthcoming documentary on Hendrix and the blues.
It’s safe to say that Hendrix’s blues legacy is in safe hands, and it may be safe to say he would approve, or at least that he would rather have been linked to the blues, or to classical music, than to what he called “freak-out psychedelic” music, as a Guardian review of the Hendrix autobiography Starting at Zero quotes: “I don’t want anybody to stick a psychedelic label around my neck. Sooner Bach and Beethoven.” Or sooner, I’d imagine, blues legends like Albert King, Buddy Guy, and B.B. King, of whom Hendrix sat in awe. At the top of the post, you can see Hendrix flex his Delta blues muscles on a 12-string acoustic guitar. Then in the video below it, from 1968, Hendrix gets the chance to jam with Buddy Guy, after watching Guy work his magic from the audience. (Hendrix joins Guy onstage to jam at 6:24.) The audio just above captures a jam session between B.B. King and Hendrix from the same year at New York’s Generation Club, and below, see Guy and King reminiscing a few years ago about those days of meeting and playing with Hendrix.
During their conversation, you’ll learn where Hendrix picked up one of his stage tricks, playing the guitar behind his head—and learn how little Guy knew about Hendrix the rock star, coming to know him instead as a great blues guitarist.
Jimi Hendrix Plays the Delta Blues on a 12-String Acoustic Guitar in 1968, and Jams with His Blues Idols, Buddy Guy & B.B. King is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.
Minecraft has come a long way since [Notch] first thought up the idea that would eventually make him a billionaire. The game can be enjoyed on so many levels and become so engaging that grown adults who should know better spend far more time playing it than working on, say, their backlog of Hackaday posts. As if that weren’t bad enough, now Minecraft threatens to break out of the screen with the ability to control a WiFi light bulb from within the game.
For those unfamiliar with Minecraft, it’s an open world game that allows players to interact with blocks of various materials. Players can build, destroy, explore and create landscapes and structures. An active modding community contributes everything from cosmetic texture packs to new block types with extended functionality. It was one of these mods that was leveraged to “break the fourth wall” in Minecraft. [giannoug] used the OpenComputers mod, which allows placement of programmable in-game computers with a full complement of peripherals, including an Internet connection. That allowed [giannoug] to send commands to his Brand X eBay WiFi light bulb, the protocol for which his friend [Thomas] had previously reverse engineered. Flip a switch in Minecraft and the real-world light bulb comes on instantly. Pretty cool.
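The general shape of this kind of hack is worth a sketch. [Thomas]'s actual reverse-engineered protocol isn't documented in the post (and the in-game OpenComputers machines are programmed in Lua, not Python), so everything below is hypothetical: a made-up address, port, and command framing, shown only to illustrate how many cheap WiFi bulbs are driven, build a tiny binary command and fire it at the bulb over the network.

```python
# Hypothetical bulb protocol: magic byte, on/off opcode, checksum.
# None of these constants come from the actual hack.
import socket
import struct

BULB_ADDR = ("192.168.1.45", 5577)   # made-up address and port

def build_command(on: bool) -> bytes:
    """Pack a 3-byte on/off command in the invented framing above."""
    magic = 0x71
    opcode = 0x23 if on else 0x24
    checksum = (magic + opcode) & 0xFF   # simple additive checksum
    return struct.pack("BBB", magic, opcode, checksum)

def set_bulb(on: bool) -> None:
    """Send the command to the bulb over UDP (fire-and-forget)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(build_command(on), BULB_ADDR)
```

Once the framing is known, the in-game computer only needs to reproduce those few bytes and push them through its Internet card, which is why a single flipped switch can light a real bulb with no perceptible delay.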
We’ve seen quite a few builds where Minecraft blocks inspired real-world lamps, but this is a step beyond and might be a great way to get kids into programming using Minecraft. But it’s not the first time Minecraft has broken the fourth wall – check out this 2012 effort to build a microcontroller-based Minecraft server that can toggle pins from within the game.
[Thanks to aggvan and Stathis K for the near-simultaneous tips!]
I found an interesting problem while working on a test case generator for the Tsukurimashou Project. The thing is that I'd like to assign an identifying code, which I will call an address, to each line of code in a code base. It's to be understood that these addresses have nothing to do with machine memory addresses, and they need not be sequential; they are just opaque entities that designate lines of code. Anyway, I would like lines of code to keep the same addresses, at least probabilistically, when the program is modified, so that when I collect test information about a line of code I can still keep most of it after I update the software.
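One way to get that probabilistic stability is to derive each line's address from its content rather than its position. This is a minimal sketch of that idea, not the Tsukurimashou generator's actual scheme: hash the line's text, plus an occurrence counter so duplicate lines get distinct addresses, and identical lines keep their addresses no matter what is inserted or deleted around them.

```python
# Content-derived line addresses: opaque, non-sequential, and stable
# across edits for any line whose text is unchanged.
import hashlib
from collections import Counter

def line_addresses(source: str) -> dict:
    """Map each line number to an address derived from the line's content."""
    seen = Counter()
    addrs = {}
    for lineno, text in enumerate(source.splitlines(), start=1):
        key = text.strip()
        seen[key] += 1  # disambiguate repeated identical lines
        digest = hashlib.sha256(f"{key}#{seen[key]}".encode()).hexdigest()
        addrs[lineno] = digest[:12]   # short opaque address
    return addrs

v1 = "int x = 0;\nx += 1;\nreturn x;"
v2 = "int x = 0;\nint y = 2;\nx += 1;\nreturn x;"   # one line inserted
a1, a2 = line_addresses(v1), line_addresses(v2)
```

The scheme is only probabilistic in the way the post wants: editing a line's text necessarily changes its address, but every untouched line keeps its address, so test data keyed by address survives most updates.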
Haskell implementation of the RNCryptor file format
Generic operations on tuples
The violent, mesmerizing, and brutally honest work of artist Cleon Peterson. In some paintings, demon-like characters attack people; in others, similar actions are carried out by uniformed figures. What is perhaps most disturbing is the realisation that we are drawn to such darkness: titillating news headlines and bloody movies. The violence is inside us all. More images below.
A Haskell library for efficient, concurrent, and concise data access.
Alex is a tool for generating lexical analysers in Haskell
[Scott] doesn’t have any kids, but he’s the sort of type that likes to get ahead of the game. Of course this means spending time in his garage to build a rocking cradle. Usually, these are acquired from a baby shower and are powered by batteries. Terribly uncool, considering a mechanism to keep a pendulum swinging has existed for hundreds of years now. His latest project is the escapement cradle – a cradle (or hammock) that keeps rocking with the help of falling weights.
The first video in this series goes over the inspiration and the math behind determining how much energy it takes to maintain a swinging pendulum. The second video covers a very rough prototype of the escapement mechanism, with some woodworking that looks dangerous but is kept well under control. The third video puts everything together, rocking a cradle for about 10 minutes each time the weight is lifted to the top.
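The energy math [Scott] walks through can be roughed out in a few lines. All the numbers below are illustrative guesses, not his actual figures: the idea is that the swing stores a small amount of energy, friction drains a fraction of it each period (captured by a quality factor Q), and the falling drive weight has to replace exactly that loss, which fixes how long one wind lasts.

```python
# Back-of-the-envelope run time for a weight-driven rocking cradle.
# Every parameter here is a made-up but plausible value.
import math

g = 9.81          # m/s^2
L = 0.5           # m, effective pendulum length of the cradle
m_cradle = 5.0    # kg, mass swinging at that length
theta = 0.2       # rad, swing amplitude
Q = 50            # quality factor: 2*pi * (stored energy / energy lost per cycle)

# Energy stored in the swing at amplitude theta (height gain of the mass)
E_swing = m_cradle * g * L * (1 - math.cos(theta))
# Energy the escapement must replace each full period
E_per_cycle = 2 * math.pi * E_swing / Q

m_drive, drop = 10.0, 1.5             # kg and m: the falling drive weight
E_drive = m_drive * g * drop          # energy available per wind
period = 2 * math.pi * math.sqrt(L / g)
runtime_min = (E_drive / E_per_cycle) * period / 60
```

With these guessed values the wind lasts on the order of an hour; a shorter real-world run time like [Scott]'s 10 minutes just implies more friction (lower Q), a lighter drive weight, or a shorter drop.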
[Scott] has had a few of his projects featured on Hackaday, and he’s slowly becoming the number two mechanized woodworker, right behind [Matthias]. He recently put the finishing touches on the expanding wooden table we saw a year ago, and there are surely even cooler builds in the queue for his YouTube channel.
It’s an application that allows you to keep track of your electronic components. Ever wondered where that chip was? Ever ordered components only to discover later that you already have them? PartsBox allows you to easily manage parts inventory.
PartsBox.io – Electronic parts inventory management software for makers – [Link]
The post PartsBox.io – Electronic parts inventory management software for makers appeared first on Electronics-Lab.
Tony Zhou’s video essay series, Every Frame a Painting, returns with “Buster Keaton: The Art of the Gag.” Although his series never disappoints, this particular installment may be one of Tony’s best, taking you inside the comedic gags of Buster Keaton, a founding father of visual comedy. If you’ve ever found it hard to appreciate the artistry of filmmakers from the silent era, then you will definitely want to give this a watch. And once you’ve taken it all in, you’ll likely want to spend time with our previous post: The General, “Perhaps the Greatest Film Ever Made,” and 20 Other Buster Keaton Classics Free Online. Also don’t miss this collection featuring another founding father of visual comedy: 65 Free Charlie Chaplin Films.
Buster Keaton: The Wonderful Gags of the Founding Father of Visual Comedy is a post from: Open Culture.
AMS is providing a fast and cost-effective IC prototyping service. by Clemens Valens @ elektormagazine.com:
The Full Service Foundry division of ams AG announced its fast and cost-efficient IC prototyping service, known as Multi-Project Wafer (MPW) or shuttle run. The prototyping service combines several designs from different customers onto a single wafer to offer significant cost advantages as the costs for wafers and masks are shared among a number of different participants.
Wafer pooling: low-cost prototyping service for ICs – [Link]
The post Wafer pooling: low-cost prototyping service for ICs appeared first on Electronics-Lab.
Factory tour video of the Teledyne LeCroy factory in Chestnut Ridge, NY by Sebastian:
Quick look both at & inside the brand new Tektronix TSG4106 6 GHz vector signal generator. They say real beauty lies on the inside and this beauty certainly measures up, inside and out.
Teledyne LeCroy factory tour – [Link]
Much has been made of Mark Twain’s financial problems—the imprudent investments and poor management skills that forced him to shutter his large Hartford estate and move his family to Europe in 1891. An early adopter of the typewriter and long an enthusiast of new science and technology, Twain lost the bulk of his fortune by investing huge sums—roughly eight million dollars total in today’s money—on a typesetting machine, buying the rights to the apparatus outright in 1889. The venture bankrupted him. The machine was overcomplicated and frequently broke down, and “before it could be made to work consistently,” writes the University of Virginia’s Mark Twain library, “the Linotype machine swept the market [Twain] had hoped to corner.”
Twain’s seemingly blind enthusiasm for the ill-fated machine makes him seem like a bungler in practical matters. But that impression should be tempered by the acknowledgement that Twain was not only an enthusiast of technology, but also a canny inventor who patented a few technologies, one of which is still in wide use today and, indeed, shows no signs of going anywhere. I refer to the ubiquitous elastic hook clasp at the back of nearly every bra, an invention Twain patented in 1871 under his given name, Samuel L. Clemens. (View the original patent here.) You can see the diagram for his invention above. Calling it an “Improvement in Adjustable and Detachable Straps for Garments,” Twain made no mention of ladies’ undergarments in his patent application, referring instead to “the vest, pantaloons, or other garment upon which my strap is to be used.”
The device, writes the US Patent and Trademark Office, “was not only used for shirts, but underpants and women’s corsets as well. His purpose was to do away with suspenders, which he considered uncomfortable.” (At the time, belts served a mostly decorative function.) Twain’s inventions tended to solve problems he encountered in his daily life, and his next patent catered to a hobbyist set of which he himself was a member. After the soon-to-be bra strap, Twain devised a method of improvement in scrapbooking, an avid pursuit of his, in 1873.
Previously, scrapbooks were assembled by hand-gluing each item, which Twain seemed to consider an overly laborious and messy process. His invention—writes The Atlantic in part of a series they call “Patents of the Rich and Famous”—involved “two possible self-adhesive systems,” similar to self-sealing envelopes, in which, as his patent states, “the surfaces of the leaves whereof are coated with a suitable adhesive substance covering the whole or parts of the entire surface.” (See the less-than-clear diagram for the invention above.) The scrapbooking device proved “very popular,” writes the US Patent Office, “and sold over 25,000 copies.”
Twain obtained his final patent in 1885 for a “Game Apparatus” that he called the “Memory-Builder” (see it above). The object of the game was primarily educational, helping, as he wrote, to “fill the children’s heads with dates without study.” As we reported in a previous post, “Twain worked out a way to play it on a cribbage board converted into a historical timeline.” Unlike his first two inventions, the game met with no commercial success. “Twain sent a few prototypes to toy stores in 1891,” writes Rebecca Onion at Slate, “but there wasn’t very much interest, so the game never went into production.” Nonetheless, we still have Twain to thank, or to damn, for the bra strap, an invention of no small importance.
Twain himself seems to have had some contradictory attitudes about his role as an inventor, and of the singular recognition granted to individuals through patent law. Perhaps unsurprisingly, the US Patent Office claims that Twain “believed strongly in the value of the patent system” and cites a passage from A Connecticut Yankee in King Arthur’s Court in support. But in a letter Twain wrote to Helen Keller in 1903, he expressed a very different view. “It takes a thousand men to invent a telegraph, or a steam engine, or a phonograph, or a telephone or any other important thing,” Twain wrote, “and the last man gets the credit and we forget the others. He added his little mite—that is all he did. These object lessons should teach us that ninety-nine parts of all things that proceed from the intellect are plagiarisms, pure and simple.”
Mark Twain’s Patented Inventions for Bra Straps and Other Everyday Items is a post from: Open Culture. Follow us on Facebook, Twitter, and Google Plus, or get our Daily Email. And don't miss our big collections of Free Online Courses, Free Online Movies, Free eBooks, Free Audio Books, Free Foreign Language Lessons, and MOOCs.
What’s a hacker going to do with an oven? Reflow solder? Dry out 3D printing filament? If you are [Alicia Gibb] you’d be baking a cake. While complaining that projects aren’t a hack seems to be a favorite pastime for Hackaday commentators, we think [Alicia] will be in the clear. Why? Because these cakes have Arduinos, LEDs, and motorized candles, among other gizmos.
The Game Boy cake is undeniably cool, although we have to admit the cake that screams when cut got our attention (see video below), even if it would unnerve guests.
As you might expect, you can’t bake the electronics directly into the cake. [Alicia] uses Tupperware or parchment paper to create cavities for the electronics. Connections and other solder joints get professional grade Saran wrap to keep the lead and other awful chemicals out of the cake.
We’ve seen embedded electronics in cakes before, including some that tie into the Star Wars merchandising that seems unavoidable lately. If you aren’t much of a baker, you could always just forego the cake part.
So I've struggled with the mathematics behind probability since elementary school and always just scraped by when I could. I get the concept of probability but all of the math behind it seems to escape my comprehension. This class is by far the hardest I've taken in my life and I'm really struggling with combinations, permutations, when to use which, when to add, when to multiply. Hell the way to do each problem in my homework seems like a "random" chance as to what I have to do to solve it. I'm currently sitting in front of a conditional probability problem... or at least I think that's what it is and I've gone over my notes and read my book and still have no idea how to set this up. My professor rarely shows up to his office hours and no one else I know has taken this class and everyone I talk to in the class seems to be struggling just as much as I am. This class is making me question my major choice right now, and comp sci is all I've wanted to do since junior year of high school. What to do?
Making your own booze involves a lot of sitting around waiting for things to happen, like waiting for the fermentation process to finish so you can get on with bottling and drinking it. That involves watching the bubbles in the airlock: once the frequency of the bubbles falls below a certain level, your hooch is ready for the next step.
[Waldy45] decided to automate this process by building a bubble catcher that measures the frequency of bubbles passing through the airlock. He did this using an optocoupler, a combination of LED and light sensor that changes resistance when something passes between them. You can’t see it in the image, but the horseshoe-shaped optocoupler is slotted around the thin neck in the bubble tube to sense when a bubble passes through.
The optocoupler is connected to an Arduino, running a bit of code that generates an interrupt when the optocoupler is triggered. At the moment, this just outputs an average time between bubbles to the serial port, but [Waldy45] is looking to add an ESP8266 to wirelessly connect the Arduino and contact him when the bubble frequency falls, indicating that the booze is ready for bottling.
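The Arduino sketch itself isn’t shown in the write-up, but the bookkeeping it needs is small: timestamp each interrupt, keep a rolling average of the gaps between bubbles, and trip a flag once the average gap grows past a threshold. Here is a hedged sketch of that logic in Python (the class name, window size, and threshold value are all illustrative, not taken from [Waldy45]’s build):

```python
from collections import deque

class BubbleMonitor:
    """Rolling average of the time between bubble events.

    done_threshold_s: if the average gap between bubbles exceeds this,
    fermentation is considered finished. (Name and value are invented
    for illustration.)
    """
    def __init__(self, window=10, done_threshold_s=60.0):
        # window intervals require window + 1 timestamps
        self.times = deque(maxlen=window + 1)
        self.done_threshold_s = done_threshold_s

    def bubble(self, t):
        """Record a bubble at time t (seconds) -- the interrupt handler's job."""
        self.times.append(t)

    def average_interval(self):
        """Average seconds between the most recent bubbles, or None."""
        if len(self.times) < 2:
            return None
        span = self.times[-1] - self.times[0]
        return span / (len(self.times) - 1)

    def ready_to_bottle(self):
        avg = self.average_interval()
        return avg is not None and avg > self.done_threshold_s

# Vigorous fermentation: a bubble every 2 seconds.
m = BubbleMonitor()
for t in range(0, 22, 2):
    m.bubble(t)
print(m.average_interval(), m.ready_to_bottle())  # prints: 2.0 False
```

On the Arduino itself the same bookkeeping would live in the interrupt service routine, with the threshold check done in `loop()`.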
We’ve seen a couple of over-the-top beer breweries before (here and here), but none of them have automated the actual fermentation stage, so something like this would definitely be a welcome addition. Cheers!
Director Todd Haynes proved his knack for capturing high-gloss, sumptuous period pieces with an underbelly of taboo love in 2002's "Far From Heaven." In his new film, "Carol," he brings to the screen Patricia Highsmith's novel "The Price of Salt," set in 1952 New York City, which portrays a complicated love affair between a wealthy socialite, Carol, and a young aspiring photographer, Therese.
Australian actress Cate Blanchett gives another flawless performance as the beautiful Carol, and Rooney Mara tied for the Best Actress award at this year's Cannes Film Festival for her performance as Therese.
Australian film reporter Katherine Tulich sat down with Blanchett, Mara and director Todd Haynes for this video interview.
Given the efforts of people like Malcolm McLaren to turn punk rock into a viable commercial product—or at least a quick cash grab—it’s a little surprising it took as long as it did for “pop punk” to find its profitable 90s/oughties teenage niche. Always a catch-all term for an eclectic variety of styles, punk instead further diversified in the eighties into various kinds of post-punk, hardcore, and new wave. The latter development, however, quickly found a commercial audience, with its successful fusion of 70s pop, reggae, and disco elements with punk’s wry, arty-outsider sensibility. Artists like Gary Numan, Blondie, DEVO, Talking Heads, and even The Clash emerged from the 70s with highly danceable hits that set the tone for the sound of the next decade.
But first the public had to learn what new wave was, and many of them did in a surprisingly mainstream way, in the 1979 special produced by ABC’s 20/20 in two parts here. By comparison with the number of awkwardly clueless or blatantly sensationalistic news reports on emerging youth cultures over the decades, the show is “impressively astute,” writes Dangerous Minds, “for a news segment on new music from one of the major TV networks.” It features a number of the above-named artists—DEVO, Blondie, Talking Heads—and makes an interesting attempt to situate the music on a continuum with Chuck Berry, Buddy Holly, and the Rolling Stones.
The segment claims that new wave both satirized and updated rock and pop—with DEVO’s cover of “(I Can’t Get No) Satisfaction” as Exhibit A. And while new wave would eventually glam it up with the best of the 70s disco acts—think Duran Duran or the bubblegum pop of A Flock of Seagulls or Kajagoogoo—in its first, post-punk phase, the music stripped things down to 50s simplicity. Elvis Costello gets called in to represent the revivalism inherent in the nascent form, heralding a “rediscovery of the rock and roll audience.”
There are problems with the history: punk gets labeled “an extreme element of new wave” and “a British phenomenon,” where it makes more sense to call it a precursor with roots in Detroit and New York. It’s a nitpicky point, and one shouldn’t expect too much accuracy in a top-down network news report. The real treat here is the performance clips and rare interviews. Even with the poor video quality, they’re all well worth watching, especially the extended focus on the Talking Heads in the second part above. As Dangerous Minds writes, “it takes an effort of will to remember how weird David Byrne… must have seemed to a mainstream audience in 1979.” Or not. He still comes off as pretty odd to me, and the music still fresh and inventive.
Note: Elvis Costello has just published a new autobiography, Unfaithful Music & Disappearing Ink. And he narrates the audiobook version, which you can download for free (along with another audiobook) if you join Audible.com’s 30-day Free Trial program. Get details on the 30-day trial here. And get Elvis Costello’s audiobook, by clicking here and then clicking the “Try Audible Free” button in the upper right.
via Dangerous Minds
New Wave Music–DEVO, Talking Heads, Blondie, Elvis Costello–Gets Introduced to America by ABC’s TV Show, 20/20 (1979) is a post from: Open Culture.
French photographer and director Romain Laurent (previously here and here) started making portrait-based GIFs as a way to produce work outside his commercial jobs, a spontaneous project that would encourage him to produce consistently for himself rather than clients. Each GIF is simple in its concept—a snap of the finger, a twist of the hand—yet is elegant in its composition of muted colors and subjects often centered squarely in the frame. Although GIFs often incorporate the whole subject, Laurent’s work highlights one or two specific movements, isolating gestures rather than animating the whole image.
Laurent studied product design at the National School of Applied Arts in Paris before realizing photography was his medium of choice. Laurent now works in New York City and has collaborated with clients such as Reebok, Hermes, Lacoste, Nissan, Google, and GQ. You can see more of his inventive portraits on his Tumblr, and access his GIFs directly on his Giphy page here.
Pablo Picasso, as you may know, produced a fair few memorable works in his long lifetime. He also came up with a number of quotable quotes. “Every act of creation is first an act of destruction” has particularly stuck with me, but one does wonder what an artist who thinks this way actually does when he creates — or, rather, when he first destroys, then creates. Luckily for us, we can watch Picasso in action, in vintage footage from several different films–first, at the top of the post, in a clip from 1950’s Visite à Picasso by Belgian artist and filmmaker Paul Haesaerts (which you can watch online: part one, part two).
In it, Picasso paints on glass in front of the camera, thus enabling us to see the painter at work from, in some sense, the painting’s perspective. Just above, you can watch another, similarly filmed clip from Visite à Picasso. Both of them show how Picasso could, without much in the way of apparent advance planning or thought, simply begin creating art, literally at a stroke — on which would follow another stroke, and another, and another. “Action is the foundational key to all success,” he once said, words even more widely applicable than the observation about creation as destruction, and here we can see his actions becoming art before our eyes.
It also happens in the clip above, though this time captured from a more standard over-the-shoulder perspective. “The purpose of art is washing the dust of daily life off our souls,” Picasso also said, and one senses something of that ablutionary ritual (and not just because of how little clothing the man has chosen to wear) in the footage below, wherein he lays down lines on a canvas the size of an entire wall. It comes from Henri-Georges Clouzot’s 1956 documentary The Mystery of Picasso, which offers a wealth of close looks at Picasso’s process.
You can watch the film online here, or see a few Picasso paintings come together in time-lapse in the trailer above. “The paintings created by Picasso in this film cannot be seen anywhere else,” the crawl at the end of the trailer informs us. “They were destroyed upon completion of the film.” So it seems that at least some acts of creation, for Picasso himself, not only began with an act of destruction, but ended with one too.
Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on a book about Los Angeles, A Los Angeles Primer, the video series The City in Cinema, and the crowdfunded journalism project Where Is the City of the Future? Follow him on Twitter at @colinmarshall or on Facebook.
Travel Back in Time and See Picasso Make Abstract Art is a post from: Open Culture.
David Farrell and I had an idea for the Heroes of the Computer Revolution in a Che Guevara-style poster. Then, we still had the idea, so we made the t-shirt, which we're offering on Kickstarter.
For those of you who manage a Perl mongers group, we have a special reward that cuts down on shipping by sending you your own box of shirts. Order as a group, or just pass along the Kickstarter notice. We don't expect to make a bunch of these; you'll be one of the cool kids at the next YAPC if you get one.
Oh, and the Che art:
This towering ginkgo tree is located within the walls of the Gu Guanyin Buddhist Temple in the Zhongnan Mountains in China. Every autumn the green leaves on the 1,400-year-old tree turn bright yellow and fall into a golden heap on the temple grounds drawing tourists from the surrounding area. You can see more photos here and here. (via F*ck Yeah Chinese Garden)
At its core, Common Lisp provides two primitives for performing iteration. The first of those primitives is recursion. Recursion is an amazing technique, but in this post I am going to focus on the other primitive – goto.
Goto is extremely powerful. It lets you manipulate the control flow of your program in any way you can think of. This freedom to do whatever you want is also what makes goto so dangerous. In any given piece of code that uses goto, it is difficult to tell what the purpose of the goto is because it could be used for so many different reasons. Because of this, most languages provide various kinds of builtin loops instead of providing raw goto. Even though loops aren’t as general as goto, they express the intention of the code much more clearly.
As an example, let’s say you want to print all of the characters in a file. If your language provided while loops, you could do this by printing characters from the file one at a time while there are more characters left. If Common Lisp had while loops, the code for this procedure would look like this:
(while (peek-char file nil nil)
  (write-char (read-char file)))
If your language only had goto, it becomes much more difficult to implement the procedure. In the end, you have to, in some way, simulate a while loop. One way to code the procedure with just goto is the following. First check if there are any characters left in the file. If there aren’t any, goto the end. Otherwise print the next character and go back to the start. Here is Common Lisp code that implements this:
(tagbody
 start
   (if (not (peek-char file nil nil))
       (go end))
   (write-char (read-char file))
   (go start)
 end)
Not only is the version with goto much more verbose, it is also much harder to understand. The code lacks clarity because goto is so general. It gives you no context into how it is being used. The reader of the code will have to think about the positioning of all of the gotos before they can think about the overall flow of the program. On the other hand, in the version with the while loop, merely the fact that a while loop is being used gives whoever is reading the code a decent idea of the control flow.
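The equivalence between the two versions can be made concrete outside Lisp as well. Here is a purely illustrative sketch in Python, which has no goto: each label becomes a function, each function returns the label to jump to next, and a small dispatch loop plays the role of the jump. It prints the characters of a string the way the tagbody version does for a file:

```python
import io

def print_chars_goto(file):
    # "start" block: peek at the next character; if none, goto end,
    # otherwise print it and goto start.
    def start():
        if file.read(1) == "":
            return "end"
        file.seek(file.tell() - 1)      # "un-peek" the character we just read
        print(file.read(1), end="")
        return "start"

    # "end" block: fall off the end of the tagbody.
    def end():
        return None

    # The dispatch loop: jump to whatever label the last block named.
    blocks = {"start": start, "end": end}
    label = "start"
    while label is not None:
        label = blocks[label]()

print_chars_goto(io.StringIO("hello"))  # prints "hello"
```

The host language's `while` loop here is doing exactly the job a real goto does at the machine level: transferring control to a named location.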
In reality all loops are eventually compiled down to gotos. Whenever the compiler for a language that provides loops sees a loop, it generates code that simulates the loop through goto. You can do the same thing with Lisp macros!
If you don’t know, Lisp macros are compile-time functions which take code as their input and return code as their output. When Lisp code is being compiled, all of the macros in the code are called and each one is replaced with its result. This means you can write a macro that looks like a while loop when you use it, but at compile time generates code to simulate a while loop through goto. You are in effect adding while loops to the Lisp compiler! Here is code that defines such a macro:
(defmacro while (test &body body)
  (let ((gtop (gensym))
        (gend (gensym)))
    `(tagbody
      ,gtop
        (if (not ,test) (go ,gend))
        ,@body
        (go ,gtop)
      ,gend)))
With this macro, the first code example is now valid Lisp code! The while macro takes as arguments a test and a body. It then generates code that uses the method from the second example to simulate a while loop with goto. You can actually see what the first example looks like after expanding the macro by using the function macroexpand. Here is what the generated code looks like:
(tagbody
 #:g729
   (if (not (peek-char file nil nil))
       (go #:g730))
   (write-char (read-char file))
   (go #:g729)
 #:g730)
The generated code is exactly the same as the code in the second example, except for the names of the labels. This means the two examples are functionally identical! The only real difference between them is that the first one is expressed in terms of loops, and the second one is expressed in terms of goto. Since it is so much easier to think in terms of loops than goto, there is no reason why you wouldn’t use the first example over the second.
Macros allow you to build any feature you want as long as it is possible to simulate that feature through lower level features. With respect to goto, this means you can build any kind of control flow construct you want by simulating it with goto and then putting a macro on top. In Common Lisp, all of the looping constructs (do, do*, dotimes, dolist, loop) are really just macros that expand into goto. This is what Alan Kay meant when he said “Lisp isn’t a language, it’s a building material”. It bears repeating. In Lisp, you can build any feature you want as long as it is possible to simulate that feature in terms of lower level features.
I suppose as a longtime fan of Universal monster movies and other forms of classic horror, as well as being, you know, an old man, I can be forgiven for having hoped that this newfangled origin story of a fabled monster maker would be something not entirely awful. Call me a naïve old man. Directed by Paul McGuigan (of “Lucky Number Slevin,” which should have tipped me off a little) from a screen story and script by Max Landis (of whom it can be said, at the very least, that horror appreciation runs in his family, what with his father having made “An American Werewolf In London”), “Victor Frankenstein” is, despite bravura performances from committed young leads Daniel Radcliffe and James McAvoy, all kinds of obnoxious and pointless.
It begins with Radcliffe’s Igor narrating that there’s a story “we all know,” but that the story he’s about to tell is different … and yes, I said Igor. Radcliffe’s at-first-nameless character is introduced as a much-abused circus hunchback who’s also, get this, a self-taught expert in anatomy and biology. I know, right? He pines for circus acrobat Lorelei (Jessica Brown Findlay), and when she suffers a fall, he and med student Victor Frankenstein (visiting the circus for, um, spare animal parts it turns out) perform a reviving miracle on her … and thus a bond is formed. Victor abducts the future Igor from his sideshow captors, in a scene that brings to mind a Guy Ritchie "Sherlock Holmes" movie, only not as good (yes, you read that right, “only not as good”), and installs him in his lab, the better to assist him in his ambitious, perhaps mad, schemes.
Landis’ script is extremely knowing and endlessly allusive. The Frankenstein here is Mary Shelley’s, but his backstory includes a brother, Henry, the name given to the character in James Whale’s “Frankenstein” from 1931. A police inspector tracking down Victor and his new pal gets an origin story of his own, one that puts him in line to become the Lionel Atwill character if this movie becomes a franchise, which we ought pray it does not. For all the enthusiasm brought to bear, and again, despite the brio of the young cast (McAvoy makes his “let’s create life” speeches with spittle-projecting eagerness), the movie’s a bloody mess, and a needlessly loud one as well.
The official release of the Trans Pacific Partnership (TPP), a global trade agreement between 12 countries including Canada, the United States, and Japan, has generated considerable confusion over where the Trudeau government stands on the deal. The TPP was concluded several weeks before the October election and the Liberals were careful to express general support for free trade, but refrain from embracing an agreement that was still secret.
Over the past month, there have been mixed signals over the issue. Chrystia Freeland, the new Minister of International Trade, has committed to a public consultation and noted that her government is not bound by commitments made by the Conservatives (in the interests of full disclosure, I had the opportunity to meet with Minister Freeland to discuss the TPP earlier this month). Yet following a meeting between Prime Minister Justin Trudeau and U.S. President Barack Obama at the APEC conference in Manila, Obama indicated that he expects Canada to soon be a signatory to the deal.
How to explain the seemingly inconsistent comments on the Canadian position on the TPP? The answer may well lie in the differences between reaching an agreement-in-principle, signing the formal text, and ratifying the deal. Each step is distinct and carries different legal obligations.
The agreement-in-principle occurred in early October during the final round of negotiations in Atlanta. Contrary to reports that Canada “signed” the TPP at the time, there was nothing to sign. The agreement-in-principle closed off the outstanding issues, but the formal text still needed to be finalized. There were some legal implications of the agreement-in-principle, however. For example, the intellectual property chapter includes an annex that permits Canada’s notice-and-notice rules to qualify as an alternative to the TPP’s notice-and-takedown system (which is modeled on U.S. law). The annex states that only countries that have a similar system at the time of the agreement-in-principle can use the exception, effectively creating a Canadian-only rule.
The next formal stage may be the signing of the TPP, which reports indicate could happen in New Zealand as early as February 2016. There will be strong incentives for all TPP negotiating countries, including Canada, to sign the agreement even if they are unsure about whether they will ultimately ratify it. Chapter 30 of the TPP on Final Provisions addresses some of the technical issues associated with the TPP. The chapter grants special rights to “original signatories”, who are the only ones who qualify for the rules related to entry into force of the agreement (in the event that not all TPP countries ratify the agreement within two years, it takes effect once six original signatories which account for 85 percent of the GDP of the original signatories have ratified it). In other words, if Canada does not participate in the signing of the text, it will not be an original signatory and it will not count for the purposes of the TPP taking formal effect.
The benefits of being an original signatory may be what ultimately motivates Canada to sign the TPP and why President Obama expects it to do so. However, the TPP would only become binding upon ratification of the agreement. That would require Canada to amend a wide variety of laws to ensure that it is compliant with TPP requirements. From a legal perspective, there is a significant difference between signing a treaty (which represents only a supportive gesture) vs. ratifying a treaty (which creates new legal obligations). Howard Knopf has characterized it as the difference between dating and marriage.
It should be noted that many countries sign but do not ratify treaties. Indeed, Canada has a fair number of international treaties that it has signed but not ratified, including a 1988 Convention on International Bills of Exchange and International Promissory Notes. The same is true for the United States, which has signed the United Nations Convention on the Rights of the Child, but has not ratified it.
Canada could find itself in the same position with the TPP. Assuming it signs early next year, there will still be ample time to conduct a full, open consultation on the treaty. Many have already expressed serious concerns with the implications of the TPP for intellectual property, privacy, Internet governance, and the environment. In light of the mounting concerns, the government could sign the TPP as an original signatory, but still decide not to ratify without changes to the deal.
[this post first appeared on the Centre for Law, Technology and Society blog]
The post Signing vs. Ratifying: Unpacking the Canadian Government Position on the TPP appeared first on Michael Geist.
There is little to no annotation associated with the audio that Brigid Feral posts at soundcloud.com/fferal. The closest she generally comes is a hashtag, such as the “#augmented lute” that appears on the page for her “Violet.” The source audio for her thoroughly transformed sounds can provide the distinguishing factoid, as in “Sound of Friend Peeing,” which, in case the title isn’t clear, has “#pee.” Much of the work she’s posted to SoundCloud starts with some specific sonic basis, and then goes somewhere else entirely. Recent live recordings by Feral, such as one from September 11, and another “Residuum,” posted in the past couple of weeks, use a female voice — presumably her own — as their point of origin.
In the first of these, syllables give way to a stuttery beat. In the second, there is a delightfully flowery, fluttering affect that is half human, half synthesized.
As for “Violet,” it has a dampened-industrial quality. What is being done to the lute is unclear, but the result is a battery of soft poundings: sawtooth waveforms with their edges rubbed off, beats like a mallet hitting a bag of wet feathers. The rhythm is insistent, but it’s enacted with purposefully unstable resources.
Now, there’s no lute pictured in Feral’s Instagram feed (instagram.com/fferal), but there is some excellent footage of her destroying a piano from the inside:
“Violet” originally posted at soundcloud.com/fferal.
It's time for the eleventh Cambridge OCaml compiler-hacking evening! This time we're heading to central Cambridge, to enjoy all that Pembroke College has to offer.
Where: Outer Parlour, Pembroke College, Cambridge CB2 1RF. Head through the entrance on Trumpington Street, and we'll be there at the Porter's Lodge to direct you.
When: 6pm, Monday 30th November
Who: anyone interested in improving OCaml. Knowledge of OCaml programming will obviously be helpful, but prior experience of working on OCaml internals isn't necessary.
What: fixing bugs, implementing new features, learning about OCaml internals
We're defining "compiler" pretty broadly, to include anything that's part of the standard distribution, which means at least the standard library, runtime, tools (ocamldep, ocamllex, ocamlyacc, etc.), camlp4, ocamlbuild, the documentation, OPAM, and the compiler itself. We'll have suggestions for mini-projects for various levels of experience, but feel free to come along and work on whatever you fancy.
Drinks and finger buffet will be provided.
The Dennis Sharp Archive / the Infocom Cabinet, design documents and minutiae from the creation of Infocom’s famous series of text adventures / a MeFi post dredging up this fantastic episode of From A to B, tales of Modern Motoring, a proto reality documentary by Nicholas Barker in which the subjects were allowed to quietly skewer themselves. The accompanying production photographs were taken by Martin Parr and spoke just as much as the series did / Things Cut in Half, from the cross-section obsessive / vaulted spaces discovered within by Matt Simmons / Transforming a motorway flyover. Reader comment: ‘It’s amazing how beautiful, successful and desirable even the worst places can become when you photoshop the hell out of them.’ / Freaky Trigger on the problematic Tintin adventure The Shooting Star / Glastonbury myths ‘made up by 12th-century monks’.
The Duo is a thumb-size development board designed to simplify the process of building Internet of Things (IoT) products. The Duo provides everything you need—Wi-Fi, BLE and a powerful Cloud backend, all in a compact form factor that makes it ideal for your first prototype, a finished product, and everything in between.
We’re also introducing the RBLink, an expansion board for the Duo that allows you to attach additional sensors and modules without any soldering. You’ll have all the tools you need to get your prototype up and running in no time.
RedBear Duo: A small and powerful Wi-Fi + BLE IoT board – [Link]
The post RedBear Duo: A small and powerful Wi-Fi + BLE IoT board appeared first on Electronics-Lab.
Erica Torres @ edn.com discusses lithium-air batteries, which look promising for future use.
Although scientists are still working toward replacing lithium-ion (Li-ion) batteries with lithium-air (Li-air), or lithium-oxygen, batteries, researchers at the University of Cambridge have developed a lab-based demonstrator of such a battery. It is safe to say we still have another decade before we can begin to utilize such powerful batteries, as scientists work to make sure the technology is stable enough for widespread use.
One step closer to the ‘ultimate battery’ – [Link]
For the full article visit Perl wish list: fixing Pod::Tidy
The song about being happy
Daniel W J Mackenzie’s Four Places for Piano will likely be misread as Four Pieces for Piano. There’s a blurry glimpse of one of the title instruments on the album’s cover. As for whether the piano actually played an active role in the recording of the album, that’s a far more blurry topic. Four Places for Piano is four pieces of long-form, slowly modulating drones. It opens with the highlight, “Diocleia,” which has several pulses set against each other, most noticeably a bell-like ringing that arrives every eight seconds or so. Other elements run through more quickly or more slowly, but that bell tone is the heart of it. At almost 11 minutes in length, “Diocleia” lets the ears fall prey to various cross-pollinations of meter and tone.
Each track on Mackenzie’s Four Places for Piano is noticeably distinct from the others, and yet any one of them, once you get three or four minutes in, can, as with much drone music, sound like the background noise of an electrical substation. The similarities are an illusion. Part of the pleasure of Four Places for Piano is listening not just within a track, but between them. “Duklja” has more of a sense of urgency than the others; it grows as time passes, occasionally pushing the waveforms into something rough-edged. “Zeta” has an even more pronounced bell than “Diocleia,” here like a carillon caught in a loop. And “Podgorica” distinguishes itself with a slow, crunchy beat amid its already noisy churn.
I am at the CREST Workshop on Predictive Modelling for Software Engineering this week.
Magne Jørgensen, who virtually single-handedly continues to move software cost estimation research forward, kicked off proceedings. Unfortunately he is not a natural speaker, and I think most people did not follow the points he was trying to get across; don’t panic, read his papers.
In the afternoon I learned that use of machine learning in software engineering research is a bigger train wreck than I had realised.
Machine learning is great for situations where you have data from an application domain that you don’t know anything about. Let’s say you want to do fault prediction but don’t have any practical experience of software engineering (because you are an academic who does not write much code); what do you do? Well, you could take some source code measurements (usually on a per-file basis, which is a joke given that many of the metrics often used only have meaning on a per-function basis, e.g., Halstead and cyclomatic complexity) and information on the number of faults reported in each of these files, and throw it all into a machine learner to figure out the patterns and build a predictor (e.g., to predict which files are most likely to contain faults).
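To make that workflow concrete, here is a deliberately tiny sketch of this style of fault prediction. The metrics (lines of code and churn), the labels, and the hand-rolled one-nearest-neighbour “learner” are all invented for illustration, standing in for a real library:

```python
# Toy "throw metrics at a learner" fault predictor. All data is made up;
# a real study would use a measured corpus and a library learner.

def train_1nn(rows):
    """A trivial 1-nearest-neighbour 'learner': just memorise the data."""
    return list(rows)

def predict(model, metrics):
    """Predict faulty/clean for a file from its (loc, churn) metrics."""
    def dist(row):
        (loc, churn), _ = row
        return (loc - metrics[0]) ** 2 + (churn - metrics[1]) ** 2
    _, label = min(model, key=dist)
    return label

# (loc, churn) -> observed fault label, all invented for the sketch
training = [
    ((1200, 40), "faulty"),
    ((900, 35), "faulty"),
    ((150, 2), "clean"),
    ((80, 1), "clean"),
]

model = train_1nn(training)
print(predict(model, (1000, 30)))  # a large, heavily-churned file -> faulty
print(predict(model, (100, 3)))    # a small, stable file -> clean
```

The point is not the predictor itself: nothing in this pipeline requires any understanding of why large, heavily-churned files might be fault-prone.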
There are various ways of measuring the accuracy of the predictions made by a model and there is a growing industry of researchers devoted to publishing papers showing that their model does a better job at prediction than anything else that has been published (yes, they really do argue over a percent or two; use of confidence bounds is too technical for them and would kill their goose).
I long ago learned to ignore papers on machine learning in software engineering. Yes, sooner or later somebody will do something interesting and I will miss it, but will have retained my sanity.
Today I learned that many researchers have been using machine learning “out of the box”, that is, with whatever default settings the code ships with. How did I learn this? Well, one of the speakers talked about using R’s caret package to tune the options available in many machine learners to build models with improved predictive performance. Some slides showed that the performance of caret-tuned models was often substantially better than that of the non-caret-tuned models, and many people in the room were aghast; “If true, this means that all existing papers [based on machine learning] are dead” (because somebody will now come along and build a better model using caret; I cannot recall whether “dead” or some other term was used, but you get the idea), “I use the defaults because of concerns about breaking the code by using inappropriate options” (obviously somebody untroubled by knowledge of how machine learning works).
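The caret result is easy to reproduce in miniature: a learner’s default hyperparameter can be far from optimal, and even a crude grid search changes the conclusion. Here is a stdlib-only Python sketch (data invented; a 1-D k-nearest-neighbour classifier stands in for a real learner, and for brevity k is tuned on the same held-out set that scores it, which a real study must not do):

```python
# Default vs tuned hyperparameter on invented 1-D data with one noisy label.

def knn_predict(train, x, k):
    """Majority vote among the k nearest training points."""
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(label for _, label in neighbours)
    return 1 if votes * 2 > len(neighbours) else 0

def accuracy(train, val, k):
    return sum(knn_predict(train, x, k) == y for x, y in val) / len(val)

# (value, class); the point (2.5, 1) is deliberate label noise
train = [(1.0, 0), (2.0, 0), (3.0, 0), (2.5, 1), (4.0, 1), (5.0, 1)]
val = [(2.4, 0), (2.6, 0), (4.5, 1)]

default_k = 1  # the "out of the box" setting: overfits the noisy point
best_k = max([1, 3, 5], key=lambda k: accuracy(train, val, k))  # crude grid search

print(accuracy(train, val, default_k))  # poor: 1-NN copies the noise
print(accuracy(train, val, best_k))     # tuned k smooths it out
```

Any paper comparing models at their defaults is really comparing default settings, not learners.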
I think that use of machine learning, for the purpose of prediction (using it to build models to improve understanding is ok), in software engineering research should be banned. Of course there are too many clueless researchers who need the crutch of machine learning to generate results that can be included in papers that stand some chance of being published.
Artist j.frede composes flea market photographs into custom-built frames, creating visual and narrative landscapes from the previously unassociated materials. The works spread across the wall, building on each other through similar landscapes or horizon lines. The project, titled Fiction Landscapes, builds on the artist’s interest in memory, tapping into others’ mementos of the past to create fictionalized scenes of ambiguous origin.
Although each image was once a placeholder in time for its photographer, once it is collected into a mixed-up bin at a flea market those associations are erased. “Arranging these into new landscapes that have never existed speaks to the stitching together of human behavior and how we relate to time and the past,” says Frede. “How many people have pulled over at that rest stop and taken nearly the same photo of the plain hillside? All locking their own associations into the view, first road trip with a new love; last road trip to see grandma; one of many road trips alone.”
The Los Angeles-based artist strictly uses anonymous photographs from the past for his works, never incorporating photographs of his own or individuals he knows. The memories he personally imbues into each composition in the series are instead ones he creates while making each arrangement, placing his own marker within the newly composed environment.
Following is the p5p (Perl 5 Porters) mailing list summary for the past week. Enjoy!
This week I've separated all the proposed patches for issues under a different title: Proposed patches. This is because they do not exactly fit under Updates (which I wish to reserve for news) or under Discussion (since I wish to reserve this for conversations that take place).
I hope this works better.
Feedback is always welcome. :)
Perl 5.23.5 is now available, thanks to Abigail!
Karen Etheridge provided a patch (which has since been merged) to clean up the verbosity of some tests due to passing TODO tests in Module::Metadata.
Perl #126667, reported by Dan Collins, is a fuzzer-found assertion failure.
Another issue, reported by Todd Rinaldo, mentions a curious case of a file handle in \shift and, as Dave Mitchell expanded on, anything (such as open()) that attempts to instantiate an anonymous value into a ref to a typeglob.
Another issue, reported by Lukas Mai, mentions that local is not working as expected in embedded code in regexes.
Another report argues that "mylist" shouldn't trigger for globs.
Another report notes that find dies when an empty directory list is supplied.
Regarding possible unintended mix of POD and code, Aaron Crane provides a patch to add a warning for such cases.
Ed Avis provided patches for rewording of lookahead vs. look-ahead to be consistent.
Tony Cook provided a patch to resolve Perl #126635.
Karl Williamson determined that a reported problem with an AIX test is likely a stack overflow.
Christian Hansen provided patches to make UTF-8 validation about 50% to 300% faster.
Karl Williamson sheds more light on the conversation on bitwise string operators.
On Rhizome's front page this week is Shelley Jackson's my body - a Wunderkammer (1997), a semi-autobiographical hypertext narrative that combines text and image in an exploration of a personal bodily history. Clicking on areas of a white-on-black woodcut-style portrait of a woman's body brings up pages dedicated to specific body parts—the elbow, hip, toenail, or a tattoo—with first-person anecdotes and meditations.
The work reflects a broader 1990s tendency toward feminist autobiography in hypertext literature. In her 1999 article "Wired Women Writing," for example, Laura Sullivan argued that hypertext's fragmented, multilinear qualities built on this existing literary tradition, which had the potential to "connect the feminist call to value women's personal experience with both the postmodern belief that discourse produces our understandings of our 'selves' and the materialist feminist recognition that our experiences are situated in history..."
Gav and Dan over at the Slow Mo Guys are famous for creating bizarre (and usually explosive) events in front of powerful HD slow motion cameras. Almost all of their videos are worth a watch, but their latest, involving a spinning tornado of fire, is especially great; skip ahead to 1:25 for the good stuff. Although this particular flamey vortex was created artificially using box fans, you can sometimes see real fire tornadoes in the middle of forest fires or spinning off from the plumes near an active volcano.
Depict the world around you in fascinating detail using a relaxed, methodical approach. With artist and instructor Steven Reddy as your guide, capture highly detailed scenes as you learn techniques for creating contour drawings, grisaille underpaintings, beautiful watercolors, and more. Enroll in the online Craftsy class, Dynamic Detail in Pen, Ink & Watercolor, for 50% off today — a special offer for Colossal readers.
In these online-video lessons, you’ll learn how to break down a complex scene into an initial sketch that’s light and loose. Then, create a contour drawing and a grisaille underpainting that will bring extraordinary dimension to your work, before using a limited watercolor palette to enhance your piece with harmonious color. Finally, finish your work by using contour lines to suggest shaping and hatch marks to create texture.
Visit Craftsy.com today to get 50% off lifetime access to the online class, Dynamic Detail in Pen, Ink & Watercolor, and give it a try risk-free with Craftsy’s 100% money-back promise. Offer expires November 30, 2015 at 11:59pm MT.
Ten years ago, a team led by Irina Conboy at the University of California at Berkeley showed something remarkable in a Nature paper: if you take old cells and put them in a young environment, you effectively rejuvenate them. The work has been cited hundreds of times.
Their work shows that vampire stories have a grain of truth in them. It seems that old people could be made young again by using the blood of the young. But unlike vampire stories, this is serious science.
So whatever happened to this work? It was cited and it led to further academic research… There were a few press releases over the years…
But, on the whole, not much happened. Why?
One explanation could be that the findings were bogus. Yet they appear to be remarkably robust.
The theory behind the effect also appears reasonable. Our bodies are made of cells, and these cells are constantly being reconstructed and replenished. As you age, this process slows down.
Some scientists believe that the process slows down to protect us from further harm. It is like driving an old car: you do not want to push it too hard so you drive ever more slowly as the car gets older. Others (like Conboy I suspect) appear to believe that it is the slowing down of the repair itself that causes ill-health as we age.
But whatever your favorite theory is… what Conboy et al. showed is that you could re-activate the repair mechanisms by fooling the cells into thinking that they are in a young body. At the very least, this should lead to an increased metabolism… with the worst case scenario being a much higher rate of cancer and related diseases… and the best case being a reversal of aging.
We have some elegant proof of principles, like the fact that oxytocin appears to rejuvenate old muscles so that they become seemingly indistinguishable from young muscles. (You can order oxytocin on Amazon.com.)
So why did we not see much progress in the last ten years? Conboy et al. have produced their own answer regarding this lack of practical progress:
If all this has been known for 10 years, why is there still no therapeutics?
One reason is that instead of reporting broad rejuvenation of aging in three germ layer derivatives, muscle, liver, and brain by the systemic milieu, the impact of the study published in 2005 became narrower. The review and editorial process forced the removal of the neurogenesis data from the original manuscript. Originally, some neurogenesis data were included in the manuscript but, while the findings were solid, it would require months to years to address the reviewer’s comments, and the brain data were removed from the 2005 paper as an editorial compromise. (…)
Another reason for the slow pace in developing therapies to broadly combat age-related tissue degenerative pathologies is that defined strategies (…) have been very difficult to publish in high impact journals; (…)
If you have not been subject to peer review, it might be hard to understand how peer comments can slow down researchers so much… and even discourage entire lines of research. To better understand the process… imagine that you have to convince four strangers of some result… and the burden is entirely on you to convince them… and if only just one of them refuses to accept your argument, for whatever reason, he may easily convince an editor to reject your work… The adversarial referee does not even have to admit he does not believe your result, he can simply say benign things like “they need to run larger or more complicated experiments”. In one project I did, one referee asked us to redo all the experiments in a more realistic setting. So we did. Then he complained that they were not extensive enough. We extended them. By that time I had invested months of research on purely mundane tasks like setting up servers and writing data management software… then the referee asked for a 100x extension of the data sizes… which would have implied a complete overhaul of all our work. I wrote a fifteen-page rebuttal arguing that no other work had been subjected to such levels of scrutiny in the recent past, and the editor ended up agreeing with us.
Your best strategy in such a case might be to simply “give up” and focus on producing “uncontroversial” results. So there are research projects that neither I nor many other researchers will touch…
I was reminded of what a great computer scientist, Edsger Dijkstra, wrote on this topic:
Not only does the mechanism of peer review fail to protect us from disasters, in a certain way it guarantees mediocrity (…) At the time it is done, truly original work—which, in the scientific establishment, is as welcome as an unwanted baby (…)
Dijkstra was a prototypical blogger: he wrote papers that he shared with his friends. Why can’t Conboy et al. do the same thing and “become independent” of peer review? Because they fear that people would dismiss their work as being “fringe” research with no credibility. They would not be funded. Without funding, they would quickly lose their laboratory, and so forth.
In any case, the Conboy et al. story reminds us that seemingly innocent cultural games, like peer review, can have a deep impact on what gets researched and how much progress we make over time. Ultimately, we have to allocate finite resources, if only the time of our trained researchers. How we do it matters very much.
Thankfully, since Conboy et al. published their 2005 paper, the world of academic publishing has changed. Of course, the underlying culture can only change so much; people are still tailoring their work so that it will get accepted in prestigious venues… even if it makes said work much less important and interesting… But I also think that the culture is being transformed. Initiatives like the Public Library of Science (PLoS), launched in 2003, have shown the world that you could produce high-impact serious work without going through an elitist venue.
I think that, ultimately, it is the spirit of open source that is gaining ground. That’s where the true meaning of science thrives: it does not matter who you are; what matters is whether what you are proposing works. Good science is good science no matter what the publishing venue is… And there is more to science than publishing papers… Increasingly, researchers share their data and software… instead of trying to improve your impact through prestige, you can improve your impact by making life easier for people who want to use your work.
The evolution of how we research may end up accelerating research itself…
Hovertext: Also, birthdays will be replaced by nulldays in order to gather data.
In the early years of the internet, the sense of discovery often outweighed the quality and interest of what one actually discovered. At every corner, there seemed to be an outpouring of folk art and taxonomy and personal accumulation, packaged up for presentation in this new medium with scarcely a care if anyone else clicked through. Most of the initial impetus behind things’ online presence was to track and report back on this, in line with many of the other early (and inspirational) weblogs. However, the optimism and enthusiasm that characterised the first five or so years of the internet/weblog boom has largely been buried beneath a mudslide of cynicism and clickbait.
Every discovery was once an insight into a hidden collectomania, a delight in display that revealed taxonomies that might otherwise have been lost or at the very least overlooked (orange crate labels, Soviet electronics, bottle caps, punched cards – Coudal’s Museum of Online Museums – the MoOM, is especially good at chronicling this output). Regrettably perhaps, this emphasis on specialism has become commonplace and the esoteric is now the everyday. Everyone states an interest in craft and skill and ‘creativity’, but what really seems to make a thing stand out on the contemporary internet is a striking blend of the eccentric and the skilful, the intangible qualities of the ‘viral object,’ as opposed to the quiet joy of individual discovery.
What should you do if you’re worried that someone might have exploited a compiler bug to introduce a backdoor into code that you are running? One option is to find a bug-free compiler. Another is to run versions of the code produced by multiple compilers and to compare the results (of course, under the additional assumption that the same bug does not affect all the compilers). For some programs, such as those whose only effect is to produce a text file, comparing the output is easy. For others, such as servers, this is more difficult and specialized system support is required.
Today we’ll look at using Varan the Unbelievable to defeat the sudo backdoor from the PoC||GTFO article. Varan is a multi-version execution system that exploits the fact that if you have some unused cores, running additional copies of a program can be cheap. Varan designates a leader process whose system call activity is recorded in a shared ring buffer, and one or more follower processes that read results out of the ring buffer instead of actually issuing system calls.
Compilers have a lot of freedom while generating code, but the sequence of system calls executed by a program represents its external behaviour and in most cases the compiler is not free to change it at all. There might be slight variations e.g., due to different compilers using different libraries, but these can be easily handled by Varan. Since all correctly compiled variants of a program should have the same external behaviour, any divergence in the sequence of system calls across versions flags a potential security attack, in which case Varan stops the program before any harm is done.
Typically, Varan runs the leader process at full speed while also recording the results of its system calls into the ring buffer. However, when used in a security-sensitive setting, Varan can designate some system calls as blocking, meaning that the leader cannot execute those syscalls until all followers have reached that same program point without diverging. For sudo, we designate execve as blocking, since that is a point at which sudo might perform an irrevocably bad action.
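As a rough illustration of the idea, here is a toy model in Python; the function, its return values, and the syscall lists are invented, and the real Varan intercepts actual system calls through a shared ring buffer rather than comparing lists:

```python
# Toy lockstep multi-version execution: the leader's syscall stream is
# checked against a follower's, and any divergence stops execution before
# the dangerous call is allowed through.

def run_multiversion(leader, follower, blocking=("execve",)):
    """Return (syscalls allowed to execute, status string)."""
    executed = []
    for i, call in enumerate(leader):
        shadow = follower[i] if i < len(follower) else None
        if call != shadow:
            # streams diverge: terminate both versions safely
            return executed, "divergence: leader=%r follower=%r" % (call, shadow)
        if call in blocking:
            # in the real system the leader would wait here until every
            # follower reached the same execve before it is ever issued
            pass
        executed.append(call)
    return executed, "ok"

# an honest sudo denies and write()s an error; a backdoored one exec's a shell
honest     = ["openat", "read", "write", "exit_group"]
backdoored = ["openat", "read", "execve", "exit_group"]

print(run_multiversion(backdoored, honest)[1])
```

In the real system a blocking call such as execve additionally holds the leader until every follower has reached the same point, so on divergence the malicious syscall is never issued at all.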
So here’s the setup: two builds of sudo 1.8.13, sudo-gcc, a version compiled with GCC, and sudo-clang, a version carrying the compiler-bug-based backdoor.
Now let’s visit an Ubuntu 14.04 VM where both versions of sudo (setuid root, of course) and Varan are installed. We’re using a user account that is not in the sudoers file — it should not be allowed to get root privileges under any circumstances. First let’s make sure that a sudo that was properly compiled (using GCC) works as expected:
$ /home/varan/sudo-1.8.13/install/bin/sudo-gcc cat /etc/shadow
test is not in the sudoers file. This incident will be reported.
Next, we make sure that the backdoor is functioning as intended:
$ /home/varan/sudo-1.8.13/install/bin/sudo-clang cat /etc/shadow
So far so good: the backdoored binary happily prints the contents of /etc/shadow. Next let’s try the gcc-compiled sudo as the leader with the backdoored sudo as the follower:
$ vx-suid /home/varan/sudo-1.8.13/install/bin/sudo-gcc \
/home/varan/sudo-1.8.13/install/bin/sudo-clang -- cat /etc/shadow
test is not in the sudoers file. This incident will be reported.
What happened here is that the gcc-compiled leader runs as before, since it doesn’t ever try to execute an execve call. When the backdoored follower tries to execute the malicious execve call, Varan detects the divergence and terminates both processes safely.
Now let’s try switching around the leader and follower, i.e., run the backdoored sudo as the leader with the gcc-compiled sudo as the follower:
$ vx-suid /home/varan/sudo-1.8.13/install/bin/sudo-clang \
/home/varan/sudo-1.8.13/install/bin/sudo-gcc -- cat /etc/shadow
This time the leader tries to execute the malicious execve call, and Varan blocks its execution until the follower reaches the same system call or diverges. In this case, the follower tries to execute a write system call (to print “test is not in the sudoers file...”) and thus Varan detects divergence and again terminates execution safely.
In this example, we only ran two versions in parallel, but Varan can run more than two versions. In terms of performance and resource utilization, security applications like sudo are a great match for multi-version execution: they are not CPU-bound, so any performance degradation is imperceptible to the user, and the extra cores are needed only briefly, during the critical security validation checks. We are looking into applying this approach to other critical security applications (e.g. ssh-agent and password managers), and are investigating a way of hardening executables by generating a single binary with Varan and a bunch of versions, each version generated by a different compiler. We can then deploy this hardened executable instead of the original program.
Of course, Varan can detect misbehavior other than compiler-bug-based backdoors. Divergence could be caused by a memory or CPU glitch, by a plain old compiler bug that is triggered unintentionally instead of being triggered by an adversarial patch, or by a situation where an application-level undefined behavior bug has been exploited by only one of the compilers, or even where both compilers exploited the bug but not in precisely the same way. A nice thing about N-version programming at the system call level is that it won’t bother us about transient divergences that do not manifest as externally visible behaviour through a system call.
We’ll end by pointing out a piece of previous work along these lines: the Boeing 777 uses compiler-based and also hardware-based N-version diversity: there is a single version of the Ada avionics software that is compiled by three different compilers and then it runs on three different processors: a 486, a 68040, and an AMD 29050.
Welcome to the ninety-ninth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to firstname.lastname@example.org, or @llvmweekly or @asbradbury on Twitter.
The canonical home for this issue can be found here at llvmweekly.org.
LLVM/Clang 3.7.1-rc2 has been tagged. As always, help testing is appreciated.
Clasp 0.4 has been released. Clasp is a new Common Lisp implementation that uses LLVM as a compiler backend and aims to offer seamless C++ interoperation.
Quentin Colombet has shared a plan for moving forward with global instruction selection, as proposed in his Dev Meeting talk. There's a lot of enthusiasm for this work, though there are some questions about how, in practical terms, the development should proceed and be tested. There is also hope that this new work will allow the distinction between integers and pointers to be preserved through to MachineInstructions, which is useful both for GC and for architectures where pointers aren't integers.
Eric Christopher has shared a summary of discussions from the recent Birds of a Feather discussion on the LLVM C API. This includes proposed policy for stability guarantees and extending the APIs.
Ed Maste has been experimenting with linking the FreeBSD base system with lld. With a few extra patches he's managed to link the whole FreeBSD userland.
Artem Dergachev has shared some minutes from a call about summary-based inter-procedural analysis for Clang's static analyser.
Steve King is concerned about recent code size regressions with -Os. The issue was bisected to recent changes to the heuristic for merging conditional stores. James Molloy, who authored the patch in question, suggests more investigation is necessary.
Rail Shafigulin is working on a custom VLIW architecture and has had a number of questions about the DFAPacketizer. Krzysztof Parzyszek has provided useful answers each time - well worth a read of these threads if you're doing any work with VLIW or want to learn more about DFAPacketizer.
Nick Johnson pointed out an interesting potential bug in the LiveVariables pass. There haven't been any responses yet, but he has followed up with a patch to fix the issue.
Amjad Aboud has posted a detailed RFC on ensuring LLVM debug info supports all lexically scoped entities. He includes a simple example which shows where block-local typedefs or class definitions can lead to problems.
Initial support for value profiling landed. r253484.
It is now possible to use the -force-attribute command-line option to specify a function attribute for a particular function (e.g. norecurse, noinline, etc.). This should be very useful for testing. r253550.
The built-in assembler now treats fatal errors as non-fatal in order to report all errors in a file rather than just the first one encountered. r253328.
Support for prelinking has been dropped. See the commit message for a full rationale. r253280.
Clang should now be usable for CUDA compilation out of the box. r253389.
When giving the -mcpu/-march options to Clang targeting ARM, you can now specify
I spent October mainly working on two things.
First, I optimised some common arithmetic operators: + - *, so that for the very common case of both args being simple ints in ranges that won't overflow, or both being floats, a simple C-level + or whatever can be directly done. For more complex or mixed args, it falls back to the existing slower code. For ++ and --, I optimised the simple integer case. I also improved the core SET[iun], PUSH[iun] and XPUSH[iun] macros, which set the pad targ to a numeric value and push it on the stack. Since PADTMPs and lex vars (which is typically what a pad targ is) are often used in the same way, e.g. always assigned and used as an integer, those macros now check whether the targ is already of the right type, and if so directly set the value, rather than just blindly calling sv_setiv() etc. The combination of the above makes the nbody benchmark (lots of floating-point vector arithmetic) about a third faster.
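The shape of that optimisation, guarding the cheap common case and falling back to the general code, can be sketched outside perl's C internals. Here is a Python toy of the same idea (the bounds and function names are mine, not perl's):

```python
# Guarded fast path for addition: if both operands are plain ints in a
# range where the sum cannot overflow, add directly; otherwise fall back.

FAST_MIN, FAST_MAX = -2**30, 2**30  # bounds chosen so a sum cannot overflow

def general_add(a, b):
    # stands in for the full slow path: coercion, overload, magic, etc.
    return a + b

def fast_add(a, b):
    if (type(a) is int and type(b) is int
            and FAST_MIN <= a <= FAST_MAX and FAST_MIN <= b <= FAST_MAX):
        return a + b            # fast path: simple ints, no overflow possible
    return general_add(a, b)    # slow path: floats, big ints, mixed args

print(fast_add(2, 3))      # simple int case takes the fast path
print(fast_add(2.5, 3))    # mixed args fall back to the general code
print(fast_add(2**40, 1))  # out-of-range ints also fall back
```

The real change is in C and also covers the SET/PUSH macros' type check on the pad targ, but the control flow, a cheap type-and-range guard in front of the existing general code, is the same.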
Second, I continued overhauling perl's context stack and dynamic scope implementation.
59:16 #124156: death during unwinding causes crash
0:09 [perl #117341] av_undef's POD is confusing
0:33 [perl #125937] 'x' operator on list causes segfault with possible stack corruption
4:35 [perl #126082] unshift to @ISA
1:11 [perl #126145] Problem with stack moving fix for Perl_load_module
2:23 [perl #126170] Assertion failed: S_finalize_op (op.c:2562)
1:55 [perl #126229] POSIX::strerror() clears $!
2:27 [perl #126309] 'x' operator on list causes segfault and confuses valgrind, 64-bit version
0:29 [perl #126472] Bleadperl v5.23.3-33-g6768377 breaks HANENKAMP/Tie-Simple-1.03.tar.gz
0:32 add perldelta entries
13:00 make arithmetic faster
0:34 optimise the Boyer-Moore string finder (as used in REs and index())
11:45 process p5p mailbox
98:49 Total (HH::MM)
As of 2015/10/28, since the beginning of the grant:
1600.0 total hours
15.0 average hours per week
There are 0 hours left on the grant.