
Feed aggregator

0x6B: GPL Enforcement Investigation DMCA Exemption Request

FAIF - Thu, 2021/01/14 - 11:00

Software Freedom Conservancy filed multiple exemptions in the USA Copyright Office Triennial Rulemaking Process under the Digital Millennium Copyright Act (DMCA). In this episode, Karen and Bradley explore the details of Conservancy's filing to request permission to circumvent technological restriction measures in order to investigate infringement of other people's copyright, which is a necessary part of investigations of alleged violations of the GPL and other copyleft licenses.

Show Notes: Segment 0 (00:39)
  • Bradley claims that you'll now love the audcast more than ever (02:51)
  • Conservancy filed many exemptions as part of the currently ongoing triennial DMCA Process. (02:50)
Segment 1 (04:22) Segment 2 (28:07) Segment 3 (34:36)

If you are a Conservancy Supporter as well as being a FaiFCast listener, you can join this mailing list to receive announcements of live recordings and attend them through Conservancy's Big Blue Button (BBB) server.

Send feedback and comments on the cast to <>. You can keep in touch with Free as in Freedom on our IRC channel, #faif on freenode, and by following Conservancy and FaiF on Twitter.

Free as in Freedom is produced by Dan Lynch of danlynch.org. Theme music written and performed by Mike Tarantino with Charlie Paxson on drums.

The content of this audcast, and the accompanying show notes and music are licensed under the Creative Commons Attribution-Share-Alike 4.0 license (CC BY-SA 4.0).

Categories: Free Software

Topics for our Virtual Meeting on 1/14

PLUG - Tue, 2021/01/12 - 15:05

We'll have two presentations this month: a conversation with Bradley Kuhn about the Software Freedom Conservancy, and (ab)Using DNS.

Attend the meeting on Thursday, January 14th at 7PM by visiting:

Bradley Kuhn: A conversation with Bradley Kuhn about the Software Freedom Conservancy

We're going to start the new year off with an interview. Bradley and our host will chat about the Software Freedom Conservancy, Free Software and licensing and his history in the Free Software movement.

Bradley M. Kuhn is the Policy Fellow and Hacker-in-Residence at Software Freedom Conservancy and editor-in-chief of copyleft.org. Kuhn began his work in the software freedom movement as a volunteer in 1992, when he became an early adopter of Linux-based systems, and began contributing to various Free Software projects, including Perl. He worked during the 1990s as a system administrator and software developer for various companies, and taught AP Computer Science at Walnut Hills High School in Cincinnati. Kuhn's non-profit career began in 2000, when he was hired by the FSF. As FSF's Executive Director from 2001–2005, Kuhn led FSF's GPL enforcement, launched its Associate Member program, and invented the Affero GPL. Kuhn was appointed President of Software Freedom Conservancy in April 2006, was Conservancy's primary volunteer from 2006–2010, and has been a full-time staffer since early 2011. Kuhn holds a summa cum laude B.S. in Computer Science from Loyola University in Maryland, and an M.S. in Computer Science from the University of Cincinnati. Kuhn's Master's thesis discussed methods for dynamic interoperability of Free Software programming languages. Kuhn received the O'Reilly Open Source Award in 2012, in recognition for his lifelong policy work on copyleft licensing. Kuhn has a blog and co-hosts the audcast, Free as in Freedom.

Donald Mac McCarthy: (ab)Using DNS

Let's take a look at DNS from a new perspective. In this presentation we will discuss using DNS for cost savings and speed increases in datastore operations. We will also take advantage of DNS's unique architecture and capabilities to improve redundancy and increase distribution for near zero cost. Finally, I will show how to push security farther toward the entry into the technology stack and make our applications a part of our security posture.

Radar trends to watch: January 2021

O'Reilly Radar - Tue, 2021/01/05 - 04:40

The last month of the old year showed a lot of activity on the border of AI and biology. The advances in protein folding with deep learning are a huge breakthrough that could revolutionize drug design. It’s important to remember the role AI had in developing the vaccine for COVID—and also worth remembering that we still don’t have an anti-viral. And while I didn’t list them, the other big trend has been all the lawyers lining up to take shots at Google, Facebook, et al. Some of these are political posturing; others address real issues. We said recently the tech industry has had a free ride as far as the law goes; that’s clearly over.

AI, ML, and Data
  • IBM has demonstrated that neural networks can be trained on 4-bit computers with minimal loss of accuracy and significant savings in power.
  • AI ethics researcher Timnit Gebru was fired from Google. Her contributions include the papers Datasheets for Datasets, Model Cards for Model Reporting, Gender Shades (with Joy Buolamwini), and founding the group Black in AI.  This is a severe blow to Google’s commitment to ethics in artificial intelligence.
  • Debt, poverty, and algorithms: Opaque algorithms used for credit scoring, loan approval, and other tasks will increasingly trap people in poverty without explanation.
  • The past month’s biggest success in AI had nothing to do with language. AlphaFold, DeepMind’s application of deep learning to protein folding, has made significant progress in predicting the structure of proteins. Predicting protein structure is computationally very difficult, and critical to drug discovery.
  • Microsoft points out that ML models are easily copied and reverse-engineered. Part of the solution may be setting up a deployment pipeline that allows you to change the system easily.
  • Integration between Python and Tableau: Tableau has proven itself as a platform for data visualization and business analytics.  Python is well-established as a language for data analysis and machine learning. What could be more natural than integration?
Security
  • An attack (now known as Sunburst) by Russia’s Cozy Bear organization has penetrated the U.S. Commerce, Treasury, and Homeland Security departments, in addition to an unknown number of corporations. The attack came through malware planted in a security product from SolarWinds. It still isn’t known exactly what data has been accessed, or how to rebuild infrastructure that has been compromised. The attack may well be the most serious in cyber-history.
  • Acoustic side channels: A new and important front in the struggle for privacy and data security. It’s possible for an Alexa-like device to discover what someone typed on their phone by listening to the taps.
Programming
  • Some serious streaming: The world’s highest volume real-time streaming system is built with Go.  It streams stock quotes at up to 3 million messages per second.
  • Are Dart and Flutter catching on? I’ve been very skeptical.  But Business Insider thinks so.  (Forgive the paywall.) In any case, we need alternatives for web development.
  • Solving the travelling salesman problem in linear time: not with a quantum computer, but with an analog computer that models the behavior of amoebae! It’s an unexpected way to solve an NP-hard problem, and it raises the question of whether analog computers can be integrated with digital ones–a suggestion that Von Neumann made early in the history of computing.
Operating Systems
  • Google’s Fuchsia OS, a possible replacement for Android’s Linux kernel, is now “open for contributions.” We see new programming languages almost on a daily basis, but new operating systems are rare. This could be an important event.
  • The end of CentOS Linux?  RedHat is killing CentOS Linux, and wants to move users to CentOS Stream, which appears to be a pre-release of the next RHEL (Red Hat Enterprise Linux) version–not a stable release. The community isn’t buying it.  CentOS may live on independently as Rocky Linux.
Quantum Computing
  • Quantum Supremacy with BosonSampling: Boson sampling is another computation that exists only to demonstrate quantum supremacy. It’s not useful, aside from showing that quantum computing is definitely on the way!
Biology and Medicine
  • A new drug appears to restart the brain’s processes for creating new proteins and, as a result, reverses cognitive decline due to aging. So far, experiments have only been performed on mice.
  • CRISPR is being used to engineer pigs so that they’re immune to a fatal and widespread virus called PRRS. Accuracy isn’t great (CRISPR is harder in practice than in theory), but there’s the potential for creating a breed of pigs that aren’t vulnerable to the disease.
  • The one bit of good news in the coronavirus story is that we’re seeing the fastest vaccine rollout in history.  But the Moderna (and Pfizer) vaccines were developed within days after the virus’ DNA was sequenced. The rest of the time has been spent testing. Can testing regimes be designed that are safe, effective, and much faster?
  • A sort of cyborg: drones using live moth antennas to detect scent. This could be used to detect explosives, trapped humans, gas leaks, anything identifiable by smell.  The antenna lives for a few hours after being removed from the moth. Presumably the moth doesn’t.
  • NextMind is shipping a relatively inexpensive ($399) development kit for brain interfaces. Their interface is non-invasive and relatively small: a headband with a lump on the back. Still no killer app, though.
  • Molecular analysis with Smart Phones: We thought that phone vendors had run out of sensors to add. We were wrong. Near infrared spectroscopy enables many health applications.
Business and Web
  • Leaving Silicon Valley: Tesla, now Oracle. Who’s next? And what will departing companies do to the real estate market?
  • Twitter’s proposal for Bluesky Identity, portable identity between social media platforms, was greeted with some skepticism when it launched roughly a year ago.  Tim Bray’s take on it is worth reading; it’s the “simplest thing that could possibly work” to enable cross-provider conversations.
  • Facebook’s cryptocurrency, Libra, is finally due to launch, possibly this month, if anyone cares.  And its name has changed to Diem. It’s much less ambitious and still faces regulatory issues, particularly in Europe.
Categories: Technology

Four short links: 14 Dec 2020

O'Reilly Radar - Tue, 2020/12/15 - 07:52
  1. End-to-end Entity Resolution for Big Data — Introduction to the entity resolution pipeline and the algorithms at the different stages. Includes a summary of open source tools and their features. (via Adrian Colyer)
  2. 33 Engineering Challenges of Building Mobile Apps at Scale — Part 1, covering the first 10, is up. They are: 1. State management; 2. Mistakes are hard to revert; 3. The long tail of old app versions; 4. Deeplinks; 5. Push and background notifications; 6. App crashes; 7. Offline support; 8. Accessibility; 9. CI/CD and the build train; 10. Device & OS fragmentation.
  3. Cognitive Effort vs Physical Pain — We found that cognitive effort can be traded off for physical pain and that people generally avoid exerting high levels of cognitive effort. This explains why more people don’t use (your favourite editor).
  4. If-Then-Else Had to be Invented — The history of where “else” came from, and it’s a fascinating archaeological romp through the ages of programming. E.g., Flow-Matic, Grace Murray Hopper’s predecessor to COBOL, made the three-way if a little easier to think about by talking about comparing two numbers instead of about the signs of numbers. It introduced the name “otherwise” for the case where the comparison wasn’t what you were looking for.
Categories: Technology

Four short links: 8 Dec 2020

O'Reilly Radar - Tue, 2020/12/15 - 07:49
  1. TextAttack — Framework for generating adversarial examples for NLP models. (Paper) (via The Data Exchange)
  2. Measuring Developer Productivity — There is no useful measure that operates at a finer grain than “tasks multiplied by complexity.” Measuring commits, lines of code, or hours spent coding, as some tools do, is no more useful at a team scale than it is at an individual scale. There simply is no relation between the number of code artifacts a team produces, or the amount of time they spend on them, and the value of their contributions. When engineering managers gather in the hotel bar after the conference day ends, this is one of the subjects they will debate endlessly.
  3. Legacy Code — All the things I wish I’d known twenty years ago. The top-level bullet-points: (1) Writing code isn’t the limiting factor; (2) Start with “why”; (3) Reduce the feedback loop; (4) Make people collaborate; (5) Different strategies to approach Legacy Code.
  4. Distributed Systems Reading List — I often argue that the toughest thing about distributed systems is changing the way you think. Here is a collection of material I’ve found useful for motivating these changes.
Categories: Technology

O’Reilly’s top 20 live online training courses of 2020

O'Reilly Radar - Wed, 2020/12/09 - 07:36

2020 has been a year of great challenges for so many, but it’s not all negative. Around the world, organizations and their workforces have risen to the occasion, recognizing the importance of expanding their knowledge, taking on new tasks, and bettering themselves both personally and professionally. With the uptick in virtual conferencing, remote work, and, for some, reentering the job market, new technology adoption was accelerated, driving the workforce to build new skills. While 2020 was the year of the global COVID-19 pandemic, it will also be commemorated as the year online learning prevailed. As vaccine development progresses and life gets back to normal, it will bring a more future-proof workforce ready to share its new knowledge with the world.

Since the onset of the pandemic, online courses and programs have seen dramatic spikes in consumption and enrollment, and O’Reilly has been no different. A big contributor to O’Reilly’s continued success during these unprecedented times has been its live virtual training courses. This year, more than 900,000 users have registered for live events through O’Reilly online learning—a 96% increase from last year. This functionality also allowed O’Reilly to introduce its Superstream Series, a new lineup of virtual conferences featuring expert speakers delivering talks and training sessions on the most important topics and emerging trends in technology. 

So what are the trends driving this uptick in learning? Companies are increasingly interested in understanding how to successfully adjust to remote work and effectively manage time. And individual O’Reilly members are looking to build and expand on their technical skills in everything from software architecture and microservices to AI and programming languages. But which topics are the brightest minds in technology most focused on? We’ve compiled the top 20 live online training courses of 2020 to shed some light on what those in the know want to know.

Top 20 live online training courses of 2020

  1. Software Architecture Superstream Series: Software Architecture Fundamentals
  2. Microservice Fundamentals
  3. Kubernetes in Three Weeks
  4. O’Reilly Infrastructure & Ops Superstream: SRE Edition
  5. Fundamentals of Learning: Learn Faster and Better Using Neuroscience
  6. Strata Data & AI Superstream Series: Deep Learning
  7. Microservices Architecture and Design
  8. Machine Learning from Scratch
  9. Leadership Communication Skills for Managers
  10. Design Patterns Boot Camp
  11. Strata Data and AI Superstream
  12. Getting Started with Python 3
  13. Python Data Science Full Throttle with Paul Deitel: Introductory Artificial Intelligence (AI), Big Data, and Cloud Case Studies
  14. Getting Started with Amazon Web Services (AWS)
  15. Architectural Katas
  16. Introduction to Critical Thinking
  17. Python Full Throttle with Paul Deitel
  18. Microservice Collaboration
  19. OSCON Open Source Software Superstream Series: Live Coding—Go, Rust, and Python
  20. SOLID Principles of Object-Oriented and Agile Design

For a more in-depth analysis of the hot technology topics of 2020, based on data from O’Reilly online learning, stay tuned for our upcoming report, Wrapping Up 2020 (and What to Expect for 2021): Trends on O’Reilly online learning.

Categories: Technology

What is functional programming?

O'Reilly Radar - Tue, 2020/12/08 - 10:19

It has long seemed to me that functional programming is, essentially, programming viewed as mathematics. Many ideas in functional programming came from Alonzo Church’s Lambda Calculus, which significantly predates anything that looks remotely like a modern computer. Though the actual history of computing runs differently: in the early days of computing, Von Neumann’s ideas were more important than Church’s, and had a tremendous influence on the design of early computers—an influence that continues to the present. Von Neumann’s thinking was essentially imperative: a program is a list of commands that run on a machine designed to execute those commands. 

So, what does it mean to say that functional programming is programming “viewed as mathematics”? Von Neumann was a “mathematician,” and programming of all kinds found its first home in Mathematics departments. So, if functional programming is mathematical, what does that mean? What kind of math?

I’m not thinking of any specific branch of mathematics. Yes, the Lambda Calculus has significant ties to set theory, logic, category theory, and many other branches of mathematics. But let’s start with grade school mathematics and assignment statements; they’re basic to any programming language. We’re all familiar with code like this:

  i = i+1  # or, more simply
  i += 1   # or, even more simply
  i++      # C, Java, but not Python or Ruby

Mathematically, this is nonsense. An equation is a statement about a relationship that holds true. i can equal i; it can’t equal i+1. And while i++ and i+=1 no longer look like equations, they are equally nonsensical; once you’ve said that i equals something, you can’t say it equals something else. “Variables” don’t change values; they’re immutable.

Immutability is one of the most important principles of functional programming. Once you’ve defined a variable, you can’t change it. (You can create a new one in a different function scope, but that’s a different matter.) Variables, in functional programming, are invariant; and that’s important. You may be wondering “what about loops? How can I write a for loop?” Not only do you have to do without index variables, you can’t modify any of the variables in the loop body. 

Setting aside the (solvable) problem of iteration, there’s no reason you can’t write code in (almost) any non-functional language that has this same effect. Just declare all your variables final or const. In the long run, functional programming is more about a specific kind of discipline than about language features. Programming languages can enforce certain rules, but in just about any modern language it’s possible to follow those rules without language support.
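That discipline is easy to illustrate. Here is a minimal Python sketch (mine, not the article’s; the helper name total is made up) that sums a list without ever reassigning a variable, using functools.reduce to fold the list into a result:

```python
from functools import reduce

def total(numbers):
    # reduce() folds the list into a single value; each step
    # computes a new accumulator rather than mutating one.
    return reduce(lambda acc, n: acc + n, numbers, 0)

print(total([1, 2, 3, 4]))  # prints 10
```

Nothing in the function body is ever rebound; the language doesn’t enforce that, but the code follows the rule anyway.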

Another important principle of functional programming is that functions are “first class entities.” That is, there are minimal restrictions about where you can use a function. You can also have functions without names, often called “lambdas” (which refers directly to the Lambda Calculus, in which functions were unnamed).  In Python, you can write code like this:

  data.sort(key=lambda r: r[COLUMN])

The “key” is an anonymous function that returns a specific column of each row; that function is then used for sorting. Personally, I’m not overly fond of “anonymous functions”; it’s often clearer to write the anonymous function as a regular, named function. So I might write this:

  def sortbycolumn(r): return r[COLUMN]
  data.sort(key=sortbycolumn)

The ability to use functions as arguments to functions gives you a very nice way to implement the “strategy pattern”:

  def squareit(x):
      return x*x

  def cubeit(x):
      return x*x*x

  def rootit(x):
      import math
      return math.sqrt(x)

  def do_something(strategy, x):
      ...

  do_something(cubeit, 42)
  weird = lambda x: cubeit(rootit(x))
  do_something(weird, 42)

I often get the sense that all programmers really want from functional programming is first-class functions and lambdas. Lambdas were added to Python very early on (1.0) but didn’t reach Java until Java 8. 

Another consequence of thinking mathematically (and possibly a more important one) is that functions can’t have side-effects and, given the same arguments, will always return the same value. If a mathematician (or a high school trig student) writes

  y = sin(x)

they don’t have to deal with the possibility that sin(x) sets some global variable to 42, or will return a different value every time it’s called. That just can’t happen; in math, the idea of a “side-effect” is meaningless. All the information that sin(x) provides is encapsulated in the return value. In most programming languages, side-effects happen all too easily, and in some, they’re almost an obsession. Again, creating functions that have no side-effects is a matter of exercising discipline. A programming language can enforce this rule, but you can follow it whether or not your language makes you do it. We don’t have cartoon devils looking over our shoulders saying “Go ahead; make a side effect. No one will notice.”
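To make the contrast concrete, here is a small sketch (the function names are mine, not the article’s): two versions of the same computation, one of which leaves a trace in global state and one of which doesn’t:

```python
import math

call_log = []  # global state, visible outside any function

def impure_sin(x):
    call_log.append(x)  # side-effect: mutates global state
    return math.sin(x)

def pure_sin(x):
    # Everything this function "does" is captured in its return
    # value; calling it twice with the same x changes nothing else.
    return math.sin(x)

pure_sin(0.0)
pure_sin(0.0)
impure_sin(0.0)
print(call_log)  # prints [0.0]: only the impure call left a trace
```

The pure version is trivially cacheable and testable; the impure one can’t even be called twice without changing the program’s state.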

Functional languages vary the degree to which they enforce the lack of side-effects. If you’re a purist, anything that interacts with the real world is a side-effect. Printing a document? Changing a row in a database? Displaying a value on the user’s screen? Those are all side-effects (they aren’t completely encapsulated in the value returned by the function), and they have to be “hidden” using a mechanism like monads in Haskell. And that’s the point at which many programmers get confused and throw up their hands in despair. (I’ll only point you to Real World Haskell.) In both Java and Python, lambda functions can have side-effects, which means that, strictly speaking, they aren’t really “functional.” Guido van Rossum’s discussion of the addition of Lambdas to Python is worth reading; among other things, he says “I have never considered Python to be heavily influenced by functional languages, no matter what people say or think.”

Streams are often associated with functional languages; they’re essentially long (perhaps infinite) lists that are evaluated lazily—meaning that elements of the stream are only evaluated as they’re needed. Maps apply a function to every element of a list, returning a new list—and that includes streams, which (for these purposes) are specialized lists. That’s an incredibly useful feature; it’s a great way to write a loop without having to write a loop—and without even knowing how much data you have. You can also create “filters” that choose whether to pass any element of the stream to the output, and you can chain maps and filters together. If you think this sounds like a Unix pipeline, you’re right. Streams, maps, filters, and the act of chaining them together really have as much to do with the Unix shell as they do with functional languages.
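In Python, generators give you exactly this kind of lazy stream, and the built-in map and filter chain over them. A brief sketch (mine, not the article’s):

```python
from itertools import islice

def naturals():
    # An infinite, lazily evaluated "stream" of integers.
    n = 0
    while True:
        yield n
        n += 1

# Chain a map and a filter, pipeline style; no value is computed
# until something pulls on the end of the chain.
squares = map(lambda n: n * n, naturals())
odd_squares = filter(lambda n: n % 2 == 1, squares)

print(list(islice(odd_squares, 4)))  # prints [1, 9, 25, 49]
```

The stream is infinite, but islice() only pulls the four elements it needs—the shell-pipeline analogy is exact.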

Another way to avoid writing loops is to use “comprehensions,” a feature of Python. It’s easy to get very fond of list comprehensions; they’re compact, they eliminate off-by-one errors, and they’re very flexible. Although comprehensions look like a compact notation for a traditional loop, they really come from set theory—and their closest computational “relatives” are to be found in relational databases, rather than functional programming. Here’s a comprehension that applies a function to every element of a list:

  # pythonic examples.  First, list comprehension
  newlist = [ somefunction(thing) for thing in things ]
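A comprehension can filter as well as map, with an if clause playing the role of the filter. A small sketch, with concrete stand-ins of my own for the article’s somefunction and things:

```python
things = [1, 2, 3, 4, 5]

def somefunction(x):  # stand-in for the article's somefunction()
    return x * x

# Map and filter in one comprehension: square only the even values.
newlist = [somefunction(t) for t in things if t % 2 == 0]
print(newlist)  # prints [4, 16]
```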

The most general way to avoid traditional loops is to use recursion: a function that calls itself. Here’s the recursive equivalent to the previous comprehension:

  def iterate(t, l):
      if len(t) == 0:
          return l  # stop when all elements are done
      return iterate(t[1:], l + [somefunction(t[0])])  # process remainder

Recursion is a mainstay of functional languages: you don’t have indices being modified, and you’re not even modifying the resulting list (assuming that append doesn’t count as modification). 

However, recursion has its own problems. It’s hard to wrap your mind around recursion; you still need to do a lot of your own bookkeeping (in this case, passing in a vector so a result can be returned); and except in one (common) special case, called “tail recursion,” it can be a performance nightmare.
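The iterate() function above is already in tail-recursive form: the recursive call is the last thing the function does, so it can be rewritten mechanically as a loop. CPython won’t do that for you (it has no tail-call elimination, so deep recursion hits the recursion limit), but the rewrite is simple. A sketch, using a trivial stand-in of my own for somefunction:

```python
def somefunction(x):  # stand-in, as in the earlier examples
    return x + 1

def iterate(t, l):
    # Tail-recursive: the recursive call is the final action.
    if len(t) == 0:
        return l
    return iterate(t[1:], l + [somefunction(t[0])])

def iterate_loop(t, l):
    # The same function after mechanical tail-call elimination:
    # rebind the "arguments" and jump back to the top.
    while len(t) != 0:
        t, l = t[1:], l + [somefunction(t[0])]
    return l

print(iterate([1, 2, 3], []))       # prints [2, 3, 4]
print(iterate_loop([1, 2, 3], []))  # prints [2, 3, 4]
```

Languages like Scheme guarantee this transformation; in Python you have to perform it yourself when the input might be large.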

I started by saying that functional programming was programming considered as “math,” and that’s at least partially correct. But is that claim useful? There are many branches of mathematics that map onto programming concepts in different ways. Functional programming only represents one of them. If you’re a topologist, you may well like graph databases. But discussing which branch of mathematics corresponds to which programming practices isn’t really helpful. Remembering high school algebra may help when thinking about immutability, statelessness, and the absence of side-effects; but most programmers will never study the real mathematical origins of functional programming. Lambdas are great; passing functions as arguments to other functions is great; even recursion is (sometimes) great; but we’re fooling ourselves if we think programmers are going to start using Java as if it were Haskell. But that’s OK; for Java programmers, the value of Lambdas isn’t some mathematical notion of “functional,” but in providing a huge improvement over anonymous inner classes. The tools to be functional are there, should you choose to use them.

In college, I learned that engineering was about making tradeoffs. Since then, I’ve heard very few programmers talk about tradeoffs—but those tradeoffs are still central to good engineering. And while engineering uses a lot of mathematics, engineering isn’t mathematics, in part because mathematics doesn’t deal in tradeoffs. Using “mathematics” as a way to think about a particular style of disciplined coding may be useful, particularly if that discipline leads to fewer bugs. It’s also useful to use the tools of mathematics to make good tradeoffs between rigor, performance, and practicality—which may lead you in an entirely different direction. Be as functional as you need to (but no more).

Categories: Technology

Four short links: 4 Dec 2020

O'Reilly Radar - Fri, 2020/12/04 - 13:56
  1. NAND Game — You start with a single component, the nand gate. Using this as the fundamental building block, you will build all other components necessary. (See also NAND to Tetris)
  2. Facebook’s Game AI — today we are unveiling Recursive Belief-based Learning (ReBeL), a general RL+Search algorithm that can work in all two-player zero-sum games, including imperfect-information games. ReBeL builds on the RL+Search algorithms like AlphaZero that have proved successful in perfect-information games. Unlike those previous AIs, however, ReBeL makes decisions by factoring in the probability distribution of different beliefs each player might have about the current state of the game, which we call a public belief state (PBS). In other words, ReBeL can assess the chances that its poker opponent thinks it has, for example, a pair of aces.
  3. In-Database Machine Learning — We demonstrate our claim by implementing tensor algebra and stochastic gradient descent using lambda expressions for loss functions as a pipelined operator in a main memory database system. Our approach enables common machine learning tasks to be performed faster than by extended disk-based database systems or as well as dedicated tools by eliminating the time needed for data extraction. This work aims to incorporate gradient descent and tensor data types into database systems, allowing them to handle a wider range of computational tasks.
  4. Scaling Datastores at Slack with Vitess — Vitess is YouTube’s MySQL horizontal-scaling solution. This article is a really good write-up of what they were doing, why it didn’t work, how they tested the waters with Vitess, and how it’s working for them so far.
Categories: Technology

Four short links: 1 Dec 2020

O'Reilly Radar - Tue, 2020/12/01 - 13:54
  1. AlphaFold — This is astonishing: protein-folding solved by Google’s DeepMind. Figuring out what shapes proteins fold into is known as the “protein folding problem”, and has stood as a grand challenge in biology for the past 50 years. In a major scientific advance, the latest version of our AI system AlphaFold has been recognised as a solution to this grand challenge by the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP). And from Science: The organizers even worried DeepMind may have been cheating somehow. So Lupas set a special challenge: a membrane protein from a species of archaea, an ancient group of microbes. For 10 years, his research team tried every trick in the book to get an x-ray crystal structure of the protein. “We couldn’t solve it.” But AlphaFold had no trouble. It returned a detailed image of a three-part protein with two long helical arms in the middle. The model enabled Lupas and his colleagues to make sense of their x-ray data; within half an hour, they had fit their experimental results to AlphaFold’s predicted structure. “It’s almost perfect,” Lupas says. “They could not possibly have cheated on this. I don’t know how they do it.” Far more useful (and to me, more impressive) than AlphaGo.
  2. Purpose-First Programming — Some students resist the cognitively-heavy tasks of simulating program execution. The secret to teaching those folks to program may be “purpose-first programming”: She used Github repositories and expert interviews to identify a few programming plans (just like Elliot Soloway and Jim Spohrer studied years ago) that were in common use in a domain that her participants cared about. She then taught those plans. Students modified and combined the plans to create programs that the students found useful. Rather than start with syntax or semantics, she started with the program’s purpose. Very reminiscent of the late 90s Perl and PHP copy-and-change coding boom that got orders of magnitude more people programming than were coming through CS courses at the time.
  3. Conversations with The Year 2000 — Paul Ford is a genius.
    ’00: How does HTML work now?
    ’20: It’s pretty simple, you define app logic as unidirectional dataflow, then fake up pseudo-HTML components that mirror state, and a controller mounts fake-page deltas onto the browser surface.
    ’00: How do you change the title?
    ’20: You can’t.
  4. cube3d.dna — A raytracer implemented in DNA. How to deploy: (1) Synthesize the oligonucleotides from the cube3d.dna file. (2) Arrange the test tubes as shown in the diagram below. (3) Don’t forget to provide the initial concentrations according to the table below. (4) Use a pipette to encode the position (row and column) of each tube to start the computation.
Categories: Technology

Radar trends to watch: December 2020

O'Reilly Radar - Tue, 2020/12/01 - 10:33

This month’s collection of interesting articles that point to important trends is dominated by AI. That’s not surprising; AI has probably been the biggest single category all year. But its dominance over other topics seems to be increasing. That’s partly because there’s more research into why AI fails; partly because we’re beginning to see AI in embedded systems, ranging from giant gas and oil wells to the tiny devices that Pete Warden is working with.

Artificial Intelligence
  • Teaching AI to manipulate and persuade: Combine NLP with reinforcement learning, and train in a multiplayer role-playing game. This is where AI gets scary, particularly since AI systems don’t understand what they’re doing (see the next item).
  • GPT-3 is great at producing human-like language, but that’s as far as it goes; it has no sense of what an appropriate response to any prompt might be. For example, suggesting suicide as a solution to depression. This isn’t a surprise, but it means that GPT-3 really can’t be incorporated into applications.
  • Why machine learning models fail in the real world, and why it’s a very difficult problem to fix: Any set of training data can lead to a huge number of models with similar behavior on the training data, but with very different performance on real-world data. Deciding which of these models is “best” (and in which situations) is a difficult, and unstudied, problem.
  • Tiny NAS: Neural Architecture Search designed to automate building Tiny Neural Networks. Machine Learning on small devices will be an increasingly important topic in the coming years.
  • Pete Warden on the future of TinyML: There will be hundreds of billions of devices in the next few years. Many of them won’t be “smart”; they’ll be more intelligent versions of dumb devices. We don’t need “smart refrigerators” that can order milk automatically, but we do need refrigerators that can use energy more efficiently and notify us when they’re about to fail.
  • The replication crisis in AI: Too many academic AI papers are published without code or data, and using hardware that can’t be obtained by other researchers. Without access to code, data, and hardware, academic papers about groundbreaking results are little more than corporate marketing.
  • Machine learning to detect gas leaks: Granted, this is for oil-well scale natural gas leaks, but we should all be more aware of these invisible applications of machine learning. It’s not just autonomous vehicles and face recognition. And lest we forget, invisible applications of ML also have problems with bias, fairness, and accountability.
  • Vokens: What happens when you combine computer vision with natural language processing? Is it possible to isolate the meaningful elements in a picture, then use that to inform language models like GPT-3 to add an element of “common sense”?
  • Using AI to diagnose COVID-19 via coughs: MIT has developed an AI algorithm that detects features in a cough that indicate a COVID-19 infection. It is at least as accurate as current tests, particularly for asymptomatic people, provides results in real time, and could easily be built into a cell phone app.
  • Over time, models in feedback loops (e.g., economic competition) tend to become more accurate for a narrower slice of the population, and less accurate for the population as a whole. Essentially, a model that is constantly retraining on current input will, over time, make itself unfair.
  • Robots in construction: The construction industry has been resistant to automation. Canvas has built a robot that installs drywall. This robot is in use on several major sites, including the renovation of the Harvey Milk terminal at San Francisco Airport.
  • Simplifying the robot’s model of the external world is the route to better collaborations between robots and humans.
  • Honda wins approval to sell a level-3 autonomous vehicle. The vehicle is capable of completely taking over driving in certain situations, not just assisting. It should be on sale before March.
  • Nbdev is a literate programming environment for Python. It is based on Jupyter, but encompasses the entire software lifecycle and CI/CD pipeline, not just programming.
  • A visual programming environment for GraphQL is another step in getting beyond text-based programming. A visual environment seems like an obvious choice for working with graph data.
  • PHP 8 is out!  PHP is an old language, and this release isn’t likely to put it onto the “trendy language” list.  But with a huge portion of the Web built with PHP, this new release is important and definitely worth watching.
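The underspecification problem described in “Why machine learning models fail in the real world” above is easy to demonstrate with a toy example; the data and models below are invented for illustration:

```python
import numpy as np

# Ten training points on the line y = x, with one point nudged slightly
# (a stand-in for measurement noise).
x_train = np.linspace(0, 1, 10)
y_train = x_train.copy()
y_train[4] += 0.05

# Two models with near-identical (tiny) training error...
linear = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
wiggly = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

for name, model in [("degree 1", linear), ("degree 9", wiggly)]:
    train_err = np.max(np.abs(model(x_train) - y_train))
    print(f"{name}: max training error {train_err:.4f}, "
          f"prediction at x=2: {model(2.0):.1f}")

# ...but wildly different behavior outside the training distribution:
# the degree-9 model interpolated the noise and extrapolates by orders
# of magnitude, while the linear model stays near y = x.
```

Both fits look equally good on the training data; nothing in that data alone tells you which one to trust in the real world.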
Privacy and Security
  • Google is adding end-to-end encryption to their implementation of RCS, which is a standard designed to replace SMS messaging. RCS hasn’t been adopted widely (and, given the dominance of the telephone system, may never be adopted widely), but standards for encrypted messaging are an important step forward.
  • Tim Berners-Lee’s privacy project, Solid, has released its first project: an organizational privacy server. The idea behind Solid is that people (and organizations) store their own data in secure repositories called Pods that they control. Bruce Schneier has joined Inrupt, the company commercializing Solid.
  • CMU has shown that passwords with a minimum length of 12 characters that pass some simple tests can be remembered and resist attack. We can move on from password policies that require obscure combinations of uppercase and lowercase letters, punctuation, and numerals, and from mandatory regular password changes.
  • Remember DNS cache poisoning? It’s back. Unfortunately.
  • A public mesh WiFi network for New York City: Mesh networks can provide Internet access in locations where established providers don’t care to go–but making them work at scale is difficult. Technology we first heard about in Cory Doctorow’s very strange Someone Comes To Town, Someone Leaves Town.
  • Hyper-scale indexing: Helios is Microsoft’s reference architecture for the next generation of cloud systems. It is capable of handling extremely large data sets (even by modern standards) and combines centralized cloud computing with edge computing.
  • The Raspberry Pi 400 looks like a LOT of fun. It’s a Raspberry Pi 4 built into a keyboard (like the very early personal computers); 1.8 GHz ARM processor, 4 GB RAM, more I/O ports than a MacBook Pro; it just needs a monitor. I just hope the keyboard is good.
  • I should say something positive about Apple’s M1, but I won’t. I’m disenchanted enough with them as a company that I really don’t care how good the processor is.
  • Amazon reviews complaining that scented candles have no smell correlate with COVID-19 cases. A nice application of data analysis using publicly available sources. Data science wins.
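The CMU password finding above suggests a policy of length plus a few simple checks. A minimal sketch, with an illustrative toy blocklist standing in for a real breached-password list:

```python
# A length-plus-simple-checks password policy, in the spirit of the CMU
# recommendation. The tiny blocklist is an illustrative stand-in; a real
# deployment would check against a large breached-password corpus and
# estimate guessability properly.
COMMON_PASSWORDS = {"password1234", "qwertyuiop12", "123456789012"}

def acceptable(password: str) -> bool:
    if len(password) < 12:                    # minimum length of 12
        return False
    if password.lower() in COMMON_PASSWORDS:  # reject known-common choices
        return False
    if len(set(password)) < 4:                # reject e.g. "aaaaaaaaaaaa"
        return False
    return True

print(acceptable("correct horse battery staple"))  # True
print(acceptable("password1234"))                  # False: on the blocklist
print(acceptable("short"))                         # False: under 12 characters
```

Note what is absent: no required mix of character classes, and no expiry clock.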
Categories: Technology

Four short links: 27 Nov 2020

O'Reilly Radar - Fri, 2020/11/27 - 13:51

  1. Brian Kernighan Interviews Ken Thompson — From a fun interview: McIlroy keeps coming up. He’s the smartest of all of us and the least remembered (or written down)… McIlroy sat there and wrote —on a piece of paper, now, not on a computer— TMG [a proprietary yacc-like program] written in TMG… And then! He now has TMG written in TMG, he decided to give his piece of paper to his piece of paper and write down what came out (the code). Which he did. And then he came over to my editor and he typed in his code, assembled it, and (I won’t say without error, but with so few errors you’d be astonished) he came up with a TMG compiler, on the PDP-7, written in TMG. And it’s the most basic, bare, impressive self-compilation I’ve ever seen in my life. (via Hacker News)
  2. ROS World 2020 Videos — all of the ROS World videos, including all the lightning talks. ROS = Robot Operating System.
  3. Learning from Language — we propose a simple approach called Language Shaped Learning (LSL): if we have access to explanations at training time, we encourage the model to learn representations that are not only helpful for classification, but are predictive of the language explanations. (Paper)
  4. Easy Theory — YouTube lectures on computer science theory. Mondays: Algorithms; Wednesdays: Theory of Computation; Fridays: Theory of Computation; Sundays: Livestream/bonus.

Categories: Technology

Four short links: 24 Nov 2020

O'Reilly Radar - Tue, 2020/11/24 - 05:44
  1. OpenStreetMap is Having a Moment — Apple was responsible for more edits in 2019 than Mapbox accounted for in its entire corporate history. See also the 2020: Curious Cases of Corporations in OpenStreetMap talk from State of the Map. (via Simon Willison)
  2. Drone Warfare — The second point, “SkyNet”, is the interesting bit. Azerbaijan and Armenia fought a war and drones enabled some very asymmetric outcomes. Quoting a Washington Post story, Azerbaijan, frustrated at a peace process that it felt delivered nothing, used its Caspian Sea oil wealth to buy arms, including a fleet of Turkish Bayraktar TB2 drones and Israeli kamikaze drones (also called loitering munitions, designed to hover in an area before diving on a target). […] Azerbaijan used surveillance drones to spot targets and sent armed drones or kamikaze drones to destroy them, analysts said. […] Their tally, which logs confirmed losses with photographs or videos, listed Armenian losses at 185 T-72 tanks; 90 armored fighting vehicles; 182 artillery pieces; 73 multiple rocket launchers; 26 surface-to-air missile systems, including a Tor system and five S-300s; 14 radars or jammers; one SU-25 war plane; four drones and 451 military vehicles. (via John Birmingham)
  3. Peregrine — an efficient, single-machine system for performing data mining tasks on large graphs. Some graph mining applications include: Finding frequent subgraphs; Generating the motif/graphlet distribution; Finding all occurrences of a subgraph. Peregrine is highly programmable, so you can easily develop your own graph mining applications using its novel, declarative, graph-pattern-centric API. To write a Peregrine program, you describe which graph patterns you are interested in mining, and what to do with each occurrence of those patterns. You provide the what and the runtime handles the how.
  4. Declining Marginal Returns of Researchers — (Tamay Besiroglu) I found that the marginal returns of researchers are rapidly declining. There is what’s called a “standing on toes” effect: researcher productivity declines as the field grows. Because ML has recently grown very quickly, this makes better ML models much harder to find. (Dissertation)
Categories: Technology

Four short links: 20 Nov 2020

O'Reilly Radar - Fri, 2020/11/20 - 05:25
  1. epr — Terminal/CLI Epub reader.
  2. I Should Have Loved Biology — Conveys well the magic of the field. Notable also for the reference to A Computer Scientist’s Guide to Cell Biology, which I didn’t realise existed.
  3. Ur-Technical Debt — Reviving Ward Cunningham’s take on technical debt. Simply put, ur-technical debt arises when my ideas diverge from my code. That divergence is inevitable with an iterative process. […] “[I]f you develop a program for a long period of time by only adding features and never reorganizing it to reflect your understanding of those features, then eventually that program simply does not contain any understanding and all efforts to work on it take longer and longer.”
  4. Directus — a real-time [REST and GraphQL] API and App dashboard for managing SQL database content.
Categories: Technology

On Exactitude in Technical Debt

O'Reilly Radar - Tue, 2020/11/17 - 05:23

If software is such stuff as dreams are made on, how do we talk about nightmares? Software is not the tangible, kickable stuff our senses are tuned to, so we draw on metaphor to communicate and reason about it.

The 1970s offered up spaghetti code to describe the tangle of unstructured control flow. This has inspired many software-as-pasta descriptions, from lasagne for layered architectures to ravioli for—pick a decade—objects, components, modules, services, and microservices. Beyond its disordered arrangement, however, spaghetti has little to offer us as a metaphor. It doesn’t provide us with a useful mental model for talking about code, and has far too many positive associations. If you love both ravioli and spaghetti, it’s not obvious that one of these is worse for your software architecture than the other.

A metaphor is a mapping that we use to describe one thing in terms of another—sometimes because we want to show something familiar from an unfamiliar angle, as in poetry, but sometimes because we want to show something unfamiliar or abstract in a more familiar light, as in software. To be considered good, a metaphor has to offer a number of points of useful correspondence with what is being described. Pasta doesn’t quite do this.

Another quality of a good metaphor is that it should not have too many obvious points of conflict. It will never map its target perfectly—a metaphor is a conceit not an identity—but a good metaphor is one whose key qualities don’t contradict the very thing we are trying to say, whose points of difference don’t distract from the mental model being shared.

We sometimes talk about code decay and software rot. These terms give a sense of degradation over time. This seems accurate and relatable. They also suggest a response: cleaning (we brush our teeth to reduce the chance of tooth decay) or treatment (we treat wood to avoid it rotting). So far so good… but the problem with these metaphors is they refer to natural processes that happen independently of anything we do. If you don’t brush your teeth, you will experience decay. If you don’t touch code, it doesn’t intrinsically degrade.

The third quality of a metaphor that makes it effective is familiarity to its audience. Explaining something unfamiliar in terms of something else that is also unfamiliar can be a long road to travel a short distance (or to end up where you started). If you are familiar with the concept of entropy in statistical mechanics, with the second law of thermodynamics, and with the idea that work is needed to reduce entropy and increase order in a system, then software entropy might strike you as a descriptive metaphor—and not simply because the word work transfers happily from the world of thermodynamics to the day-to-day experience of developers. If, however, these concepts are not accessible and require explanation, then, regardless of its other merits, software entropy may not be the best way to talk about accidental complexity in code.

Perhaps the most popular metaphor in use is based on financial debt, originating with Ward Cunningham in 1992. As Martin Fowler described in 2003:

Technical Debt is a wonderful metaphor developed by Ward Cunningham to help us think about this problem. In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice.

When we look at technical debt, we see a metaphor that checks all three boxes: it has a number of useful points of correspondence; the points of difference don’t overwhelm the core idea; it is familiar. Furthermore, it brings with it a useful working vocabulary. For example, consider what the following debt-related words suggest to you in a software context: repayment, consolidation, creditworthiness, write-off, borrowing.

Although we know that by definition no metaphor is perfect, there are two common ways in which the metaphor is misapplied: assuming technical debt is necessarily something bad; equating technical debt with a financial debt value. The emphasis of the former is misaligned and the latter is a category error.

If we are relying on the common experience of our audience, financial debt is almost always thought of as a burden. If we take that together with the common experience of code quality and nudge it with leading descriptions such as “quick and dirty,” it is easy to see how in everyday use technical debt has become synonymous with poor code and poor practice. We are, however, drawing too heavily on the wrong connotation.

Rather than reckless debt, such as from gambling, we should be thinking more along the lines of prudent debt, such as a mortgage. A mortgage should be offered based on our credit history and our ability to pay and, in return, we are able to buy a house that might otherwise have been beyond our reach. Similarly, Ward’s original motivation was to highlight how debt in code can be used for competitive advantage:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite.

This comes with a clear caveat and implication: a debt is a loan. A debt is for repayment, not for running up:

The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.

As in the real world, how we run up debt and how we manage it turn out to be more complex than the simplicity of our best intentions. There are teams that make time-saving decisions wisely, revisiting and addressing them later in a timely manner. But in most cases where debt is incurred, discussed, and lamented, codebases reflect the firefight of different priorities, skills, and people. It’s still technical debt, but it lacks the prudence and intention of Ward’s original purpose.

There are also teams and tools that embrace the debt metaphor so tightly that they forget it’s a metaphor. They treat it literally and numerically, converting code quality into a currency value on a spreadsheet or dashboard. The consequences of this thinko range from being a harmless fiction largely ignored by developers and managers to a more damaging numerology that, even though it’s well intentioned, can mislead development effort.

If we’re going to quantify it, what is it we’re quantifying? Do we list off code smells? What is the debt value of a code smell? Is it constant per kind of code smell? For example, is duplicate code characterised by a single cost? And are code smells independent of one another? Consider that, for example, duplication is sometimes used to reduce coupling, so the debit becomes a credit in that context. We can conclude that a code smell is not an isolated thing with a single look-up debt value, so this is clearly a more complex problem dependent on many factors. As a multivariable problem, what does it depend on? And how? And how do we know? And what would the value or—more likely—value distribution reflect? The cost of fixing? Or, more honestly, an estimate of the cost of fixing?

But even if we are somehow able to conjure a number out of this ever-growing list of considerations—and even if that number has some relation to observed reality—we have put a number to the wrong quantity. We have, in fact, missed the whole point of the metaphor.

Technical debt is not the cost of repaying the debt: it is the cost of owning the debt. These are not the same. That is the message of the technical debt metaphor: it is not simply a measure of the specific work needed to repay the debt; it is the additional time and effort added to all past, present, and future work that comes from having the debt in the first place.
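The point is easy to make concrete with a toy model; every number below is invented:

```python
# Toy model of the owning-vs-repaying distinction (all numbers invented).
# Repaying the debt is a one-off cost; owning it taxes every future task.
repay_cost = 10        # days to clean up the quick-and-dirty code
drag_per_task = 0.5    # extra days each task costs while the debt exists
tasks_per_month = 8

def cost_of_owning(months: int) -> float:
    """Cumulative interest paid if the debt is never repaid."""
    return drag_per_task * tasks_per_month * months

for months in (3, 12, 24):
    print(f"after {months:2d} months: owning has cost "
          f"{cost_of_owning(months):5.1f} days vs {repay_cost} days to repay")
```

At these invented rates, the interest paid exceeds the repayment cost within three months, and the gap only widens; the debt's size is the open-ended stream of interest, not the one-off repayment figure.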

By taking the metaphor literally, we have robbed it of its value. Its value is to offer us a figure of speech not of currency, a mental model for talking and reasoning about qualities of our code that are not simply stated in code. No matter how well meant, pushing any metaphor beyond its applicability leads to metaphor shear. It is, after all, metaphor and not identity.

Categories: Technology

Four short links: 17 Nov 2020

O'Reilly Radar - Tue, 2020/11/17 - 05:21
  1. NDSS Symposium 2020 Papers — Large pile of security research from the 2020 Network and Distributed System Security Symposium, including papers on topics as wide-reaching as hypervisor fuzzing and The Attack of the Clones Against Proof-of-Authority, which sounds like a very niche Star Wars sequel indeed.
  2. Liquid Information Flow Control — We present Lifty, a domain-specific language for data-centric applications that manipulate sensitive data. A Lifty programmer annotates the sources of sensitive data with declarative security policies, and the language statically and automatically verifies that the application handles the data according to the policies. Moreover, if verification fails, Lifty suggests a provably correct repair, thereby easing the programmer burden of implementing policy enforcing code throughout the application.
  3. So You’ve Made a Mistake and It’s Public — Wikipedians’ excellent advice for what to do when you’ve been busted making a mistake.
  4. GraphQL Editor — Create a schema by using visual blocks system. GraphQL Editor will transform them into code.
Categories: Technology

Four short links: 13 Nov 2020

O'Reilly Radar - Fri, 2020/11/13 - 05:20
  1. Advanced System on a Chip Lecture Notes (2016) — Topics: 1. Basic Processor & Memory hierarchy; 2. Advanced Out-of-Order Processor; 3. Data-parallel processors; 4. Micro-controller introduction; 5. Multicore; 6. RISC-V core; 7. Advanced Multicore; 8. Multicore programming; 9. Graphics Processing Unit (GPU); 10. Heterogeneous SoC; 11. GPU Programming; 12. Application-Specific Instruction-Set Processor (ASIP); 13 PULP: Parallel Ultra-Low-Power Computing; 14. Architecture in the Future – Wrap-up (via Hacker News).
  2. Flix — Next-generation reliable, safe, concise, and functional-first programming language.
    Flix is a principled and flexible functional-, logic-, and imperative- programming language that takes inspiration from F#, Go, OCaml, Haskell, Rust, and Scala. Flix looks like Scala, but its type system is closer to that of OCaml and Haskell. Its concurrency model is inspired by Go-style processes and channels. Flix compiles to JVM bytecode, runs on the Java Virtual Machine, and supports full tail call elimination.
    And supports first-class Datalog constraints enriched with lattice semantics.
  3. 20 Megatrends for the 2020s — Abundance, connectivity, healthspan, capital, AR and Spatial Web, smart devices, human-level AI, AI-Human collaboration, software shells, renewable energy, insurance industry switches to prevention, autonomous vehicles and flying cars, on-demand production and delivery, knowledge, advertising, cellular agriculture, brain-computer interfaces, VR, sustainability/environment, and CRISPR. Even if you don’t believe these are the trends of the future, it’s worth knowing what your customers/partners are being told.
  4. Credential Management — Level -2: No Authentication; Level -1: All Passwords = “password”; Level 0: Hardcode Everywhere; Level +1: Move Secrets into a Config File; Level +2: Encrypt the Config File; Level +3: Use a Secret Manager; Level +4: Dynamic Ephemeral Credentials.
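The jump from level 0 to level +1 in that list can be as small as the sketch below; the variable name is illustrative, not prescribed by the article:

```python
import os

# Level 0: hardcoded everywhere (don't do this):
#   DB_PASSWORD = "hunter2"

# Level +1: move the secret out of the source tree and into the
# environment (or a config file kept out of version control).
def db_password() -> str:
    try:
        return os.environ["DB_PASSWORD"]
    except KeyError:
        raise RuntimeError(
            "DB_PASSWORD is not set; refusing to fall back "
            "to a hardcoded default"
        ) from None

# Levels +3/+4 replace this lookup with a call to a secret manager
# that issues short-lived credentials on demand.
```

Failing loudly when the secret is missing matters: a silent hardcoded fallback would quietly drop you back to level 0.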
Categories: Technology

Multi-Paradigm Languages

O'Reilly Radar - Tue, 2020/11/10 - 06:29

The programming world used to be split into functional languages, object-oriented languages, and everything else (mostly procedural languages). One “was” a functional programmer (at least as a hobby) writing Lisp, Haskell, or Erlang; or one “was” an OO programmer (at least professionally), writing code in Java or C++.  (One never called oneself a “procedural programmer”; when these names escaped from academia in the 1990s, calling yourself a “procedural programmer” would be akin to wearing wide ties and bell-bottom jeans.)

But this world has been changing. Over the past two decades, we’ve seen the rise of hybrid programming languages that combine both functional and object-oriented features. Some of these languages (like Scala) were multi-paradigm from the beginning. Others, like Python (in the transition from Python 2 to 3) or Java (with the introduction of Lambdas in Java 8) are object-oriented or procedural languages to which functional features were added. Although we think of C++ as an object-oriented language, it has also been multi-paradigm from the beginning. It started with C, a procedural language, and added object-oriented features. Later, beginning with the Standard Template Library, C++ was influenced by many ideas from Scheme, a descendant of LISP.  JavaScript was also heavily influenced by Scheme, and popularized the idea of anonymous functions and functions as first class objects. And JavaScript was object-oriented from the start, with a prototype-based object model and syntax (though not semantics) that gradually evolved to become similar to Java’s.

We’ve also seen the rise of languages combining static and dynamic typing (TypeScript in the JavaScript world; the addition of optional type hinting in Python 3.5; Rust has some limited dynamic typing features). Typing is another dimension in paradigm space. Dynamic typing leads to languages that make programming fun and where it’s easy to be productive, while static typing makes it significantly easier to build, understand, and debug large systems. It’s always been easy to find people praising dynamic languages, but, except for a few years in the late 00s, the dynamic-static axis hasn’t attracted as much attention.
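Python’s optional type hints show how the two combine: the annotations below are ignored at runtime, so the language stays dynamic, but a static checker such as mypy can use them to reject bad calls before the code ever runs. A minimal sketch:

```python
def total(prices: list[float], discount: float = 0.0) -> float:
    """Sum prices and apply a fractional discount."""
    return sum(prices) * (1.0 - discount)

# At runtime the hints change nothing: Python stays dynamic.
print(total([10.0, 5.0], discount=0.1))  # 13.5

# But a static checker flags this call without running it:
#   total("not a list")   # error: str is not list[float]
```

The same codebase can be loosely typed where exploration matters and strictly checked where correctness matters.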

Why do we still see holy wars between advocates of functional and object-oriented programming? That strikes me as a huge missed opportunity. What might “multi-paradigm programming” mean? What would it mean to reject purity and use whatever set of features provide the best solution in any given context? Most significant software is substantial enough that it certainly has components where an object-oriented paradigm makes more sense, and components where a functional paradigm is superior.  For example, look at a “functional” feature like recursion.  There are certainly algorithms that make much more sense recursively (Towers of Hanoi, or printing a sorted binary tree in order); there are algorithms where it doesn’t make much of a difference whether you use loops or recursion (whenever tail recursion optimizations will work); and there are certainly cases where recursion will be slow and memory-hungry. How many programmers know which solution is best in any situation?
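The recursion trade-off is easy to see side by side in Python (which, notably, does not optimize tail calls):

```python
# Towers of Hanoi reads naturally as recursion: move n-1 discs aside,
# move the largest, move the n-1 back on top.
def hanoi(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> int:
    """Return the number of moves needed for n discs."""
    if n == 0:
        return 0
    moves = hanoi(n - 1, src, dst, aux)   # clear the way
    moves += 1                            # move the largest disc
    moves += hanoi(n - 1, aux, src, dst)  # stack the rest on top
    return moves

print(hanoi(4))  # 15, i.e. 2**4 - 1

# For a simple sum, recursion buys nothing over a loop, and in CPython
# it costs stack depth (a loop never hits the recursion limit):
def total_loop(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

print(total_loop(range(10)))  # 45
```

Knowing which form fits which problem is exactly the multi-paradigm skill the paragraph above asks for.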

These are the sort of questions we need to start asking. Design patterns have been associated with object-oriented programming from the beginning. What kinds of design patterns make sense in a multi-paradigm world? Remember that design patterns aren’t “invented”; they’re observed, they’re solutions to problems that show up again and again, and that should become part of your repertoire. It’s unfortunate that functional programmers tend not to talk about design patterns; when you realize that patterns are observed solutions, statements like “patterns aren’t needed in functional languages” cease to make sense. Functional programmers certainly solve problems, and certainly see the same solutions show up repeatedly. We shouldn’t expect those problems and solutions to be the same problems and solutions that OO programmers observe. What patterns yield the best of both paradigms? What patterns might help to determine which approach is most appropriate in a given situation?

Programming languages represent ways of thinking about problems. Over the years, the paradigms have multiplied, along with the problems we’re interested in solving. We now talk about event-driven programming, and many software systems are event-driven, at least on the front end. Metaprogramming was popularized by JUnit, the first widely used tool to rely on this feature that’s more often associated with functional languages; since then, several drastically different versions of metaprogramming have made new things possible in Java, Ruby, and other languages.
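Early JUnit relied on reflection: find every method whose name starts with test and invoke it. The same trick translates directly into Python introspection; this simplified sketch uses invented class and method names:

```python
# A miniature, JUnit-style runner: use reflection to discover and
# invoke every method whose name starts with "test".
class CalculatorTests:
    def test_addition(self):
        assert 1 + 1 == 2

    def test_negation(self):
        assert -(-3) == 3

    def helper(self):  # not discovered: name lacks the "test" prefix
        pass

def run_tests(suite_cls) -> list[str]:
    suite = suite_cls()
    ran = []
    for name in dir(suite):  # reflection: inspect attributes by name
        method = getattr(suite, name)
        if name.startswith("test") and callable(method):
            method()         # raises AssertionError on failure
            ran.append(name)
    return ran

print(run_tests(CalculatorTests))  # ['test_addition', 'test_negation']
```

The program examines and drives itself through its own structure, which is the essence of the metaprogramming the paragraph describes.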

We’ve never really addressed the problem of how to make these paradigms play well together; so far, languages that support multiple paradigms have left it to the programmers to figure out how to use them. But simply mixing paradigms ad hoc probably isn’t the ideal way to build large systems–and we’re now building software at scales and speeds that were hard to imagine only a few years ago. Our tools have improved; now we need to learn how to use them well. And that will inevitably involve blending paradigms that we’ve long viewed as distinct, or even in conflict.

Thanks to Kevlin Henney for ideas and suggestions!

Categories: Technology

Four short links: 10 November 2020

O'Reilly Radar - Tue, 2020/11/10 - 05:13
  1. Hypothesis as Liability — Would the mental focus on a specific hypothesis prevent us from making a discovery? To test this, we made up a dataset and asked students to analyze it. […] The most notable “discovery” in the dataset was that if you simply plotted the number of steps versus the BMI, you would see an image of a gorilla waving at you (Fig. 1b).
  2. Tesla Engineering Inside Goss — Lots and lots of inside engineering horror stories (2 years old by now). my issue was the fact that the systems doing the flashing were running the yocto images and perl and the guy writing the perl was also responsible for writing the thing that actually updates the car. that thing (the car-side updater) is about ~100k lines of C in a single file. code reviews were always a laugh riot.
  3. Teach Testing First — An extremely good idea. Testers and security specialists have a different mindset to regular programmers: they look to pervert and break the software, not simply to find the golden path whereby it produces the right behaviour for the right inputs. Perhaps if more people learned testing first, we’d end up with more secure software.
  4. Realistic and Interactive Robotic Gaze — Astonishingly creepy prototype with astonishingly life-like eyeballs. Great work from Disney Research. (Paper)
Categories: Technology

Topics for the Virtual Meeting on 11/12

PLUG - Mon, 2020/11/09 - 10:54
We'll have 2 presentations this month, QubesOS: a toolkit for secure applications and Space Night Talk Show

Attend the meeting on Thursday November 12th at 7PM by visiting:

Kevin O'Gorman: QubesOS: a toolkit for secure applications

SecureDrop is a whistleblowing system designed to protect the anonymity of sources against highly-capable adversaries. It's also not exactly user-friendly. FPF is building a next-generation SecureDrop workstation for journalists using Qubes OS as a base. This presentation covers some of the properties of Qubes that make the workstation possible while preserving the level of security that SecureDrop has historically provided.

About Kevin:
Kevin is a Newsroom Support Engineer at the Freedom of the Press Foundation, based in Toronto, Canada. His involvement in digital security stems from his time spent working in various roles with media organizations including the CBC and The Globe And Mail, where he led security workshops for journalists and worked with FPF to implement the first Canadian SecureDrop instance.

Nathan Cluff: Space Night Talk Show

PLUG is hosting its first ever talk show. Tune in as Hans asks guest Nathan Cluff about FLOSS in space and other geeky topics.

der.hans will interview Nathan Cluff for PLUG's first ever talk show as they talk about throwing things at other planets, our first spacecopter and whatever else comes up

First Space Night video:
Planetary Society:

About Nathan:
Nathan is the Lead Systems Administrator for the Mastcam and Mastcam-Z cameras on the Mars Science Laboratory and Mars 2020 rovers, in addition to supporting operations for various other missions such as the Lunar Polar Hydrogen Mapper (LunaH-Map) mission. Nathan has been involved in various Linux administration positions for the last 18 years and has been in the School of Earth and Space Exploration at ASU for the last 4 years.

Four short links: 6 Nov 2020

O'Reilly Radar - Fri, 2020/11/06 - 04:59
  1. Dealing with Security Holes in Chips — system security starts at the hardware layer.
  2. Ubooquity — free home server for your comics and ebooks library. “Like Plex for books.”
  3. NoisePage — a relational database management system developed by the Carnegie Mellon Database Group. The research goal of the NoisePage project is to develop high-performance system components that support autonomous operation and optimization as a first-class design principle. Also interesting in databases this week: a rundown on Procella, YouTube’s analytical database.
  4. Technical Debt — Where I first found this excellent description of technical debt, by Ward Cunningham: “If you develop a program for a long period of time by only adding features but never reorganizing it to reflect your understanding of those features, then eventually that program simply does not contain any understanding and all efforts to work on it take longer and longer.”
Categories: Technology


Subscribe to LuftHans aggregator