Lucas Gonze

Reply to posts using GitHub, email, or the guestbook.

Open Musicology: William Litten

I have pushed a new music transcription project to GitHub.

This repository contains my digital transcription of a song in a book titled "William Litten's Fiddle Tunes: 1800-1802." The book was published in 1977 in Vineyard Haven, Massachusetts, by Hines Point Publishers. As far as I can tell the original had only a single run, maybe self-published by a company that only produced this one title. I came across it in the back stacks of a sheet music store in Boston ("The Beehive: Jazz Hive - No Jive!") around 1982.

Since then my copy has become badly worn. There seems to be no alternative to it. It's out of print, so I can't buy a new copy. There appear to be no digitized scans, though several academic libraries appear to have listings of hard copy versions. Letting this disappear seems wrong to me. This knowledge and these sounds shouldn't be lost.

Huntington's publication of Litten's hand-written original kept it alive for me. I hope that my digital version here will do the same for others.


Litten was a sailor whose duties included fiddling. He kept a notebook of tunes. His notebook wound up in a historical archive in a library on Martha's Vineyard, Massachusetts. The researcher who compiled the Hines Point book was Gale Huntington.

I have found one real reference to the book, in a blog called Vineyard Visitor:

In 1800, a ship’s fiddler named William Litten sailed with the British India fleet. On Tuesday, the Flying Elbows will perform some of his tunes at the Martha’s Vineyard Museum as part of a concert series that showcases Island music and its history. The ship fiddler’s job was to play rhythmic tunes to accompany the sailors’ work, sparing them the need to sing undignified sea chanteys to keep in time together. Unlike many ship’s fiddlers, Litten was musically literate, and wrote down the tunes he played in his journal. Allen Coffin of Edgartown acquired Litten’s journal and brought it home with him. It’s possible that the two men sailed together in the British Navy, or even played music together, but all of that is speculation. Eventually, Litten’s logbook landed in the collection of the Dukes County Historical Society, the organization which is now known as the Martha’s Vineyard Museum, only to be discovered by another musician over a century later.

Gale Huntington was the founding editor of the Dukes County Intelligencer (now the MVM Quarterly) from 1959 to 1977. He taught Latin and history at the high school here, and was also a fisherman, musician, folksinger, and collector of sea chanteys. He discovered Litten’s manuscript in the historical society’s library, and copied the tunes in his own clear and legible handwriting. The resulting book, “William Litten’s Fiddle Tunes, 1800 – 1802,” includes historical notes on the tunes, as well as the music itself.

Out of the many songs in this book that I have played through, this one is easily the best. Its structure is uneven and varied without being crooked. Litten pulls phrases across bar lines to the point where the meter is almost 6/4 rather than 4/4, then pads the beat counts to maintain a danceable cadence. He has a constant stream of ideas: he never takes a motif and morphs it through a conventional series of transformations. Because there is so much detail I suspect this is an original composition.

It's not hard to play. What's hard is absorbing the quirks and irregularities. It's like a hand-made woodcut. Everything feels natural to the ear, yet no curve is perfectly round, no line is perfectly straight, no pattern repeats, nothing is predictable.

Regarding the title of the song, I never play this music without wondering who Litten was thinking of. As a working seaman, he was on board with people enslaved into the galley. Did he meet someone's eyes?

Playing It Yourself

The generated directory contains auto-generated versions of the source file in visual formats for playing, audio formats for previewing, and MIDI for remixing. The formats include JPEG, PNG, SVG, FLAC, MIDI, MP3, MusicXML, PDF, and WAV.


To modify my source file, grab TheGalleySlave.mscz. Some useful things you might do:

  • Add chord symbols
  • Create tablature for guitar, banjo, mandolin, etc.
  • Reformat the graphical output to fit on a phone

I created this transcription using a music notation program called MuseScore 3. It is free software under the terms of the GPL Version 3, available on GitHub and on the open web.

I love hearing what other people create based on my work - post an issue to let me know.

Have No Profit Motive

Make things that are good in themselves. Stories that move the reader. Software that is immediately useful. Songs that are loved.

Don't have a revenue model. For money, get a job.

Make open source that is 100% open. Be an absolutist for value. Hold nothing back.

Don't identify a scalable opportunity. The world doesn't need another Facebook / Apple / Amazon / Google / Microsoft. They are bad enough.

Don't even think about raising money. You don't want this. Everything venture capital touches dies. Fund your expressive creations by working. Make things that are cashflow positive from the beginning. Avoid creative projects that cost a lot of money.

Don't make a startup. It is a waste of your precious time on earth to invent projected revenue and expenses. If you are going to stay up until midnight, don't be working on a cap table. Write fiction, not fictions.

Startups are not the way forward. They are a solution to a very small and limited set of problems. Cramming otherwise good ideas into the startup mold will almost always break them. Startups were a fresh framework during the golden age of tech utopianism, when 22-year-olds dreamed of making new social networking sites instead of cool bands. Those days are over. Nobody wants you to find competitive advantage, build a moat around your company, identify a defensible niche. Those things might be good for you, but everybody else would be worse off.

Make wonderful things for their own sake. Put your whole self into that one goal. Do not be distracted.

Will one thing:

When a woman makes an altar cloth, so far as she is able, she makes every flower as lovely as the graceful flowers of the field, as far as she is able, every star as sparkling as the glistening stars of the night. She withholds nothing, but uses the most precious things she possesses. She sells off every other claim upon her life that she may purchase the most uninterrupted and favorable time of the day and night for her one and only, for her beloved work. But when the cloth is finished and put to its sacred use: then she is deeply distressed if someone should make the mistake of looking at her art, instead of at the meaning of the cloth; or make the mistake of looking at a defect, instead of at the meaning of the cloth.

Patterns in Open Source License Compliance

I have been investigating problems incorporating third-party sources into proprietary code bases. The goal is to help companies follow the rules when they work with open source.

I'm looking at snippets, not standalone packages tidily encapsulated in their own directories. This kind of copying is free range and messy. It's everywhere. Copy-pasta is just how developers do their jobs. They snarf code anywhere and everywhere.

That means compliance problems are super common. Managers mainly aren't aware - without tooling they don't have much visibility into which code came from where.

Developers need training in how to incorporate third party code, including Stack Overflow, open source repos with licenses, and sources without license statements. I imagine most of them just don't know how to handle these situations.

Developers need to start caring about compliance problems in third party code. If failures cost them and successes benefited them, they would do the work. CIs should run bots (like GitHub Actions) to recognize snippets from third party sources and flag them for examination by code reviewers. Reviewers need to know how to evaluate the reports.

Engineering management needs to drive these changes. They need to check whether job candidates know how to comply with copyright. They need to train developers in good practices. They need to ask for compliance checking in pull requests.

These aren't hard problems. They aren't high tech. However they are very large scale: adoption of new practices across the software industry is not a small project.

Pinning GitHub Actions Project Report

I woke up today to find that a particularly troublesome pull request was merged at last. My PR hardened build security by pinning GitHub Actions, using a hash to identify the version we intended to use rather than a tagged release.

The idea is that the maintainer of an action can easily inject malicious code into releases that already have adoption: tags are mutable, so it doesn't matter that the commit labeled v1.2.3 was originally benign if users of the action simply adopt whatever commit currently carries that label. What users of the action must do instead is identify the version they want by the commit hash of the tagged release.
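In workflow terms, the fix replaces a mutable tag reference with an immutable commit hash. A sketch, with the action name, tag, and hash all illustrative:

```yaml
# Before: tag-based reference. The tag can later be moved to a malicious commit.
- uses: some-org/some-action@v1.2.3

# After: hash-pinned reference. The commit is immutable; the trailing comment
# records the human-readable tag for maintainers.
- uses: some-org/some-action@5f3f6f0d1a2b3c4d5e6f708192a3b4c5d6e7f809 # v1.2.3
```

The comment convention matters in practice: it is what upgrade tooling reads to know which release a hash was meant to track.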

The need to pin actions is widely acknowledged in the security community. It is checked by OSSF Security Scorecards and documented in GitHub's own security suggestions.

A compromise of a single action within a workflow can be very significant, as that compromised action would have access to all secrets configured on your repository, and may be able to use the GITHUB_TOKEN to write to the repository. Consequently, there is significant risk in sourcing actions from third-party repositories on GitHub.

Despite the obvious need, I came to believe that it is more of a piety than a reality. There were obstacles for which there would be solutions if more people were doing it, and when I asked questions in social media I heard nothing but crickets. I even had to write documentation for Magma contributors.

There was a need for package management tools akin to Yarn, but targeted at GitHub Actions. I discovered only a single resource in the space, a CLI tool called pin-github-action. This is a modest project with 25 stars, one contributor, and 1090 lines of code, but it occupies a unique and valuable niche. Without it my own deliverable would have been even more work, so I stopped baking my own scripts and became a contributor. I have had 3 features merged and have 2 left to complete. In the long run this project needs a feature set comparable to lockfiles in NPM or Go modules. In addition to new features for managing hashes, the hashes themselves shouldn't be exposed directly to developers. They should exist in a lockfile generated by tooling.
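As an illustration of that long-run direction, a lockfile for actions might look something like this. The format and file name are invented here for illustration; no current tool, including pin-github-action, produces it:

```yaml
# actions.lock (hypothetical, generated by tooling, not hand-edited)
actions:
  some-org/some-action:
    requested: v1.2.3                                   # what the workflow asks for
    resolved: 5f3f6f0d1a2b3c4d5e6f708192a3b4c5d6e7f809  # placeholder commit hash
```

Workflows would then reference versions by tag, and CI would resolve them through the lockfile, exactly as NPM resolves semver ranges through package-lock.json.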

It was the need for tooling and documentation that made Action pinning in Magma so much work. It shouldn't be this way. In the future the open-source security community should converge to bring the infrastructure to maturity.

Open Questions about OSS License Headers

Following up on my blog entry Systemic Improvements to License Preambles, I found that many of my ideas are already addressed in the REUSE guidelines and the SPDX documentation on short identifiers. I was familiar with the SPDX work already, but I hadn't focused on it.
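For reference, the REUSE convention is two machine-readable comment lines at the top of each file; the name, year, and license here are placeholders:

```
# SPDX-FileCopyrightText: 2022 Jane Doe <jane@example.com>
# SPDX-License-Identifier: MIT
```

This settles the syntax question for single-author, single-license files; the questions below are about the cases it doesn't cover.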

On thinking and reading more - this time with explicit awareness of the prior art - some interesting questions remained.

  1. How can I contact the developers or copyright holders when there is a need for extended permissions? For example, commercial use of code with a non-commercial license. Right now the email provided with the copyright statement is the default method.

  2. How can I set or get a complete list of copyright holders? The copyright statement in the header only applies to the creator of the project. Each subsequent contributor has their own copyright. Who are they? Were they acting in a personal capacity or did their employer own the rights? If their employer owned the rights, who was it? How can a contributor supply both their own name and their employer? Right now the best approach would be to mine git history for committer identities.

  3. Does the file mix code from different sources, with different copyright holders and license selections? This is particularly relevant to snippets pasted from sources like StackOverflow. There should be a way to mark the beginnings and endings, source, copyright holders, and permissions. Snippets are probably the biggest source of complexity. There should exist a way to verify that copied code is coming from a safe source.

  4. Where is the repo or canonical source on the internet for this file? It might have a tag to identify the source. If you look at the human-visible footer for my transcription of this folk song, you'll notice "Original Musescore file at ...github URL."

  5. How can this file be identified uniquely? How can I get an identifier that persists across modifications, copies, forks, and hosts? A simple solution is a random number embedded in the file at the time of creation.

  6. When I change the license of third-party code, should I record and publish the licensing history? This would apply when the original license permits relicensing. For example, MIT can be converted to Apache 2.0, as far as I understand.

  7. When is it ok to modify the syntax (but not semantics) of license statements (in whatever format)? Can you convert a long license statement such as the classic GPL 1.0 blurb to the equivalent shortform SPDX identifier? Can an SBOM tool that attempts to identify the license on each file save its conclusions into the file? Is there a way to mark such output as tentative?

  8. Should the header be "Copyright", “copyright”, "(c)", "©" or nothing? A single canonical answer is the best path. Ambiguity is bad. A simple solution is a messaging campaign to standardize on just one marker.

  9. Is there a machine-friendly way to mark unlicensable content like generated files? A simple solution is a new license blurb and short-form identifier saying something like "This file contains uncopyrightable information."

Systemic Improvements to License Preambles

Open source is deeply reliant on blurbs in files or directories.

For example, the GPL how-to:

This involves adding two elements to each source file of your program: a copyright notice (such as “Copyright 1999 Terry Jones”), and a statement of copying permission, saying that the program is distributed under the terms of the GNU General Public License (or the Lesser GPL, or the Affero GPL).

There are obvious systemic problems:

  • Free text is hostile to machine processing. Programs to analyze this text are notoriously inaccurate. Human vetting is built into the process.

  • Lack of clarity in how to handle edge cases. If you fork a library, is the statement "Copyright 1999 Them" or "Copyright 1999 You"?

  • Ambiguity over whether to use "C", "c.", "copyright", (c), ©

  • English only

These items are only the trivial ones that jumped out right away. Improvements are needed and possible.

Some ideas:

  • Program-friendly markup. JSON, for example.

  • Unambiguous syntax amenable to schema validation.

  • Detailed semantics with thorough documentation

  • Move most of this information to an external document on the web. The blurb left in the file would be a single line, something like:
    # {'license-on-this-file':''}
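A sketch of what program-friendly markup could carry, with all field names invented for illustration:

```json
{
  "spdx-license-identifier": "Apache-2.0",
  "copyright-holders": [
    {"name": "Jane Doe", "employer": "Example Corp", "years": "2021-2022"}
  ],
  "canonical-source": "https://example.com/project/file.c",
  "details": "https://example.com/project/license-metadata.json"
}
```

A schema over a structure like this would make validation mechanical, and the "details" pointer keeps the in-file blurb short.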

A question to you, the reader: do you know of innovations in this space? Are there ongoing efforts to work on these issues?

Follow-up post: Open Questions about OSS License Headers.

What's Stopping Library Upgrades?

Big vulnerabilities in upstream dependencies can linger in deployed software long past the point when a patch is available. Maven estimates that 35% of Log4J downloads continue to pull the version with the world-famous vulnerability.

What's the cause? Why aren't developers applying patches?

National Ecosystems

If you follow the 35% link above you'll see that countries have characteristic exposure profiles. Taiwan is far and away the worst, with China next.

Taiwan and China share a language but not a government. Maybe the problem is a mismatch between security resources and local practices. Is Mandarin well supported by Dependabot and similar tooling? Are technology news sources for developers not covering Log4J? Is there cultural skepticism? Are different development platforms (e.g. Gitea instead of GitHub) popular there, and is there a difference in security resources?

The market share of unpatched L4J in a given country is not the same as the market share at a global scale. Taiwan is tiny - even 80% unpatched downloads would have less impact on the global numbers than 20% of a huge country like China.

The follow-up work here is a country-specific study of China and Taiwan. What's holding back patches may be obvious to developers in these places.

Maturity Levels

When a codebase is mature, there is more resistance to change.

From Beyond Metadata: Code-Centric and Usage-Based Analysis of Known Vulnerabilities in Open-Source Software:

In the early phases of development, updating a library to a more recent release is relatively unproblematic, because the necessary adaptations in the application code can be performed as part of the normal development activities. On the other hand, as soon as a project gets closer to the date of release to customers, and during the entire operational lifetime, all updates need to be carefully pondered, because they can impact the release schedule, require additional effort, cause system downtime, or introduce new defects.

How can patches specifically target mature codebases?

Mature software will have older code and will tend to use older library versions. The biggest issue is simply providing non-breaking patches for older library versions. The older a library version, the less its developers want to work on it, and the greater the chance that a patch will only be available as part of a major version upgrade.

What can the security community do? Encourage library developers to support old versions. Discourage breaking changes of any kind. Encourage application developers to give preference to libraries with a record of support for older versions.

Patch Quality
There may exist a patch but it may not be well vetted. Every upgrade is a chance for something to go wrong. There may be new bugs and vulnerabilities.

Ways to ameliorate the problem:

  • Encourage and help with automated tests
  • Have a trusted third party certify updates
  • Discourage library providers who lack the resources to make trustworthy upgrades

Lack of Auto-Upgrade

The reach of automated vulnerability scanning and patching is probably still low.

Vendored (copied and pasted) code is hard to scan or upgrade. Not all languages have high-quality scanning and upgrading. The CI/CD infrastructure for automated scanning and patching is relatively new. Package repositories like Maven lack facilities to force upgrades.

Possible follow-up work:
  1. Study improvements to vendored code detection and upgrade
  2. Identify needless gaps in tooling. For example, improve Dependabot availability in Gitea.
  3. Collaborate with package repositories on forced upgrades (or discouragement of known-bad versions)
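For repositories hosted on GitHub, the tooling gap is already cheap to close: Dependabot is enabled with a small config file. A minimal sketch, where the ecosystem and schedule are choices rather than requirements:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "maven"   # scan Maven dependencies
    directory: "/"               # location of the pom.xml
    schedule:
      interval: "weekly"         # how often to open upgrade PRs
```

Once this file is committed, upgrade pull requests arrive automatically; the remaining human work is reviewing and merging them.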

Vulnerability Fatigue

Developers may be skeptical of vulnerability reports. There is a never-ending stream of announcements, but the daily impact is low.

Vulnerabilities may be in libraries that aren't used in production. Vulnerable open source dependencies: counting those that matter found that "about 20% of the dependencies affected by a known vulnerability are not deployed, and therefore, they do not represent a danger to the analyzed library because they cannot be exploited in practice." Identifying whether a dependency is one of these can take a considerable amount of work.

To ameliorate this problem, security tooling can improve detection of which risks are not a factor in production.

Another source of fatigue is inflation. Developers get cynical about new vulnerability reports when they have ignored old ones without suffering harm.

To ameliorate this problem, there could be checks and limits on new reports. A report and patch should be accompanied by a score from the Common Vulnerability Scoring System Version 3.1 Calculator.

Structural Improvements to ERC-1155 Metadata

NFT metadata could easily be simpler to code, faster to load, less bug-prone, and easier to understand than with current specs. See example at the bottom. To comment, use GitHub Discussions.

ERC-1155 metadata has bits that are clumsy and inefficient.

  1. The decimals feature mixes presentation with data. ("The number of decimal places that the token amount should display - e.g. 18, means to divide the token amount by 1000000000000000000 to get its user representation"). This feature is a product design choice for front end developers. It's irrelevant to a contract and it relies on ultra-pricey on-chain real estate.

  2. The {id} interpolation feature violates the HATEOAS constraint of REST. The feature is defined as: "If the string {id} exists in any JSON value, it MUST be replaced with the actual token ID". Well-designed APIs should use literal URIs generated by the servers that produce the metadata JSON. (I bet the interpolation feature is not necessary.)

  3. The {id} interpolation feature reinvents the concept of server-side programming, like PHP. I doubt this was intentional. I think the data format was intended to be used within Solidity, and accidentally got incorporated into a new context.

  4. Localization support mixes non-localized data with localized data, has a weird extra field for a default locale, and requires a URI to be fetched for each locale. If locale-specific strings were inline, consuming these files would be faster and the code would be simpler.

  5. Locales are defined by reference to, but this is not a list of locales; it is an organization that shepherds lists of locales.

  6. The example JSON in the published EIP has an obvious bug:

    "image": "https:\/\/\/your-bucket\/images\/{id}.png",

This is simply wrong. A '/' character does not need to be escaped in JSON, and a consumer that takes the escaped string at face value ends up with something that is not a legal URI. The example value should be:

    "image": "{id}.png",

  7. There is no way to map from one of these external JSON documents to the token that it is annotating, so there is no way to know if they can be deleted except by searching every NFT that ever existed.

A future spec would:

  1. Nuke the decimals feature
  2. Nuke {id} interpolation
  3. Put localized data inline
  4. Separate localized and non-localized fields
  5. Eliminate the default locale - this belongs in the user-agent
  6. Fix the example data
  7. Require a locale name to be a subtag in

For the purposes of robustness, I'd also like to have:

  1. Ability to map from the metadata back to the token, so NFTs don't get hooked up to the wrong metadata. This requires a token ID in the JSON.

New and refactored example:

  "imageLink": "",
  "locales": {
    "en": {
      "name": "Advertising Space",
      "description": "Each token represents a unique ad space in the city."
    "es": {
      "name": "Espacio Publicitario",
      "description": "Cada token representa un espacio publicitario único en la ciudad."
    "fr ": {
      "name": "Espace Publicitaire",
      "description": "Chaque jeton représente un espace publicitaire unique dans la ville."
  "tokenID": "0x12f28e2106ce8fd8464885b80ea865e"

For comparison, see

Future topics:

  • I haven't figured out the properties object, which is a can of worms.
  • If the URI in ERC1155Metadata_URI points to IPFS, it is useless unless it is mutable, and it is only mutable if it uses IPNS. Therefore, IPFS URIs with a non-IPNS path should be strongly discouraged.
  • Review for best practices
  • Much of the above is not related to metadata. It is about any sort of mutable data with a 1:1 relationship to an on-chain entity.

OSS Contribution Log: Blog on Logs merged into OWASP

I have had a PR merged into OWASP for the first time, a new Attacks On Logs section in the Logging Cheat Sheet. Given how much trouble it can be to get a PR merged into a new project, it's good to get a win.

This grew out of the Blog On Logs entry here.

HTML Should Support Markdown. Seriously.

Markdown has massive adoption. It clearly meets a need. The cost of HTML's power is verbosity, and sometimes that's the wrong tradeoff:

  • More typing. Writing HTML is slow.
  • More visual noise. Reading HTML is hard.

Browsers should support Markdown natively. The syntax should be part of HTML. There should be no need for a shim to translate.
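As a sketch of what native support might look like, here is a hypothetical <markdown> element. No browser implements anything like this today; the element name and behavior are invented to make the proposal concrete:

```html
<!-- hypothetical: the parser would produce the same DOM as the equivalent HTML -->
<article>
  <markdown>
## Native Markdown
Write *emphasis*, **strong text**, and [links](https://example.com)
without angle brackets.
  </markdown>
</article>
```

The parser would translate the contents into the same elements an HTML author would have written, so existing CSS and scripts would apply unchanged.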

This is very practical:

  • There is no Markdown syntax that can't be represented in HTML.
  • Markdown-to-HTML conversion is easy to implement.
  • Security risks are low.
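To back up the claim that conversion is easy, here is a minimal sketch handling just ATX headings, bold, emphasis, and inline links; it is nowhere near full CommonMark, but it shows how little machinery the core needs:

```python
import re

def md_to_html(md: str) -> str:
    """Tiny Markdown-to-HTML sketch: headings, bold, emphasis, links only."""
    out = []
    for line in md.splitlines():
        m = re.match(r"(#{1,6}) (.*)", line)
        if m:
            level = len(m.group(1))                     # heading depth = number of #
            out.append(f"<h{level}>{m.group(2)}</h{level}>")
        elif line:
            out.append(f"<p>{line}</p>")                # bare lines become paragraphs
    html = "\n".join(out)
    html = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", html)  # **bold** first
    html = re.sub(r"\*(.+?)\*", r"<em>\1</em>", html)              # then *emphasis*
    html = re.sub(r"\[(.+?)\]\((.+?)\)", r'<a href="\2">\1</a>', html)
    return html
```

A real implementation needs lists, code blocks, and escaping rules, but none of that changes the order of magnitude of the task.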

The only non-trivial task is enabling CSS, which would require a canonical DOM representation of Markdown.

To the standards-mobile! This is a task for the HTML WG. I searched the mailing list and (surprisingly) didn't find discussion.

If you like this idea, all you have to do is discuss it on social media.

Blog on Logs

I am brainstorming security requirements for system logging. Can you think of others? Are some of these too lame to bother with? Do you know of specific attacks that might be relevant?

You can reply using an issue or email.

(Update Feb 21: This is documented by OWASP as Log Injection and by CWE as CWE-117. That documentation includes well-defined threat models.)

Confidentiality
Who should be able to read what? A confidentiality attack enables an unauthorized party to access sensitive information stored in logs.

  1. Logs contain PII of users. Attackers gather PII, then either release it or use it as a stepping stone for further attacks on those users.
  2. Logs contain technical secrets such as passwords. Attackers use them as a stepping stone for deeper attacks.

Integrity
Which information should be modifiable by whom?

  1. An attacker with read access to a log uses it to exfiltrate secrets.
  2. An attack leverages logs to connect with exploitable facets of logging platforms, such as sending in a payload over syslog in order to cause an out-of-bounds write.

Availability
What downtime is acceptable?

  1. An attacker floods log files in order to exhaust disk space available for non-logging facets of system functioning. For example, the same disk used for log files might be used for SQL storage of application data.
  2. An attacker floods log files in order to exhaust disk space available for further logging.
  3. An attacker uses one log entry to destroy other log entries.
  4. An attacker leverages poor performance of logging code to reduce application performance.

Accountability
Who is responsible for harm?

  1. An attacker prevents writes in order to cover their tracks.
  2. An attacker damages the log in order to cover their tracks.
  3. An attacker causes the wrong identity to be logged in order to conceal the responsible party.
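Several of the integrity and accountability attacks above depend on writing newline characters into a log to forge or destroy entries. A minimal mitigation sketch in Python; the exact escaping convention is a choice, and real logging stacks vary:

```python
def sanitize_for_log(value: str) -> str:
    """Escape CR and LF so attacker-controlled input cannot forge extra log lines (CWE-117)."""
    return value.replace("\r", "\\r").replace("\n", "\\n")

# A payload that tries to fake a second "admin login ok" entry
# collapses back into one visible line:
safe = sanitize_for_log("alice\n2024-01-01 INFO admin login ok")
```

The same idea generalizes: treat every attacker-influenced field as data, encode it, and reserve the log's structural characters for the logger itself.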

Liz Cheney's Heel-Face Turn

Cheney: I Do Not Recognize Those In My Party Who Have Abandoned The Constitution To Embrace Donald Trump

That seems familiar. Could it be The Heel-Face Turn?

When a bad guy turns good. The term "Heel Face Turn" comes from Professional Wrestling, in which an evil wrestler (a "heel") sometimes has a change of heart and becomes good, thereby becoming a "babyface". Magazines and other promotional material from the various wrestling leagues comment on various wrestlers' changes in alignment nearly as frequently as they cover events in the ring themselves.

That depends on who you ask:

The nature of Heel-Face Turn and Face–Heel Turn is subjective (one person's "seeing the light" is another person's "heartless betrayal or fall" depending on what group the individual is going to or leaving).

What do the other members of the league of supervillains think?

Republicans rebuke Liz Cheney in unprecedented moves

Oops. Maybe she has her tropes mixed up and thinks she's in one that ends better for her character.

In movies with more than one supervillain, it's usually only the villain that acts as the Big Bad that perishes; the lesser ones either are captured, reform, or return as the Big Bad in the sequel. --(Superhero Movie Villains Die)

To get a badge

Two years ago I did a lot of work related to badging (Example) using the Open Badges 2.0 standard. At the time I had little intuition about the value. Yesterday I got a certificate for completing a Linux Foundation course. It was surprisingly satisfying, so much so that I added it to my LinkedIn profile.

When I took the course, it was partly for the learning and partly for the badge. I want to be able to position myself as a subject matter expert, and both the learning and the credential are useful. The desire to acquire the badge validated my earlier assumption that badges do lead to action.

This badge does not use Open Badges 2.0 as far as I can tell. That standard appears to be stone cold dead. There was no mention of the standard anywhere in the process or visible code. What makes the badge valuable instead are the signatories and the branding. Hero text: "The Linux Foundation"; then, with facsimile signatures, "Clyde Seepersad, SVP & General Manager, Training and Certification The Linux Foundation" and "Kay Williams, Chair of the Governing Board Open Source Security Foundation (OpenSSF)."

The badges I was issuing wouldn't have been as effective. The signatories would have been missing. The underlying evaluation would be purely algorithmic. The branding would have been an unknown startup.

The badge I did receive is valuable enough that I paid for a course I could have taken for free, just because the certificate might be helpful for my career. Sharing a badge allows me to communicate that I have knowledge. Also, demonstrating completion of the coursework is relevant to The CII Best Practices badge, which has two tests that the certificate would influence:

  • The project MUST have at least one primary developer who knows how to design secure software.

  • At least one of the project's primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them.

You get what you measure. I wanted to show success on the metric, so I went and got the knowledge.

Diversity, equity, and inclusion in open source, and Americanism

If you're working on diversity, equity and inclusion in open source and you're American, don't assume America. The world is big. Don't assume race as understood in the US is the central issue.

Every part of the world has their own hierarchy of privilege.

Diversity and Inclusion in Nigeria:

The issue of diversity has world wide relevance. As Chairman Mao Tse-Tung said: “Let a thousand flowers bloom”. However I believe, like most issues, diversity adopts different meaning and flavor, depending on the locality you situate it.

Although English is the official language, more than half of the population do not understand and or speak formal English. Pidgin English is often a means of reaching out to a significant portion of the population, but it has limited appeal in the Northern part of the country. ... There are two dominant religious groups in Nigeria, namely Moslems and Christians. Unless the workforce reflects the two religious groupings, it stands the risk of being identified as ‘belonging’ to one groups or the other. It also runs the risk of offending members of the religious groups, sometimes out of sheer ignorance.

Castes in India:

Indian Caste System

Discrimination in China:

Although 56 different ethnic groups are officially recognized in China, the nation remains fairly homogenous, with over 90% of its citizens belonging to the Han Chinese group. People from different ethnic backgrounds, as well as foreigners, consequently stand out and may sometimes face discrimination and racism in China.

All you can safely assume is that the other people in your project are smarter than you and will flip the bozo bit if you fail to see beyond your privilege.

OSS Work Log

CHAOSS: created a draft metric model for security facets of sustainability.

XSPF: went to add Tess Gadwa and Evan Boehs to the "about" page, made the changes, got ready to push, and realized I had already done this two months ago.

As a potential consumer of an open-source package, I must judge whether it is likely to introduce vulnerabilities or require updates in order to patch vulnerabilities.

Pwsafe: donated $20 to the maintainer, Rony Shapiro (GitHub).

This is the second donation I have made in roughly ten years of using his software. Then I went and stalked him on GitHub and Sourceforge. Just as I suspected, he's been patiently devoted for decades. I felt grateful and privileged.

Privacy Regulations for OSS Dev


This document is intended as a jumping-off point for people who need specifics about privacy regulations that affect open source development, whether in law or contracts. I can’t and shouldn’t offer legal advice. However, developers need to be able to educate themselves. This is a directory of resources.

This document is intended to evolve and grow. I invite you to contribute information on any jurisdictions you are familiar with.

Article 17 of International Covenant on Civil and Political Rights, 1966

United States


U.S. Code § 552a
(NIST) Guide to Protecting the Confidentiality of PII


OPPA 2003

CCPA 2018

Contract Law


Privacy Statement

Acceptable Use Policy

Linux Foundation

Telemetry Data Policy

