Hello Mozillians,
We are happy to announce that on Friday, February 5th, we are organizing the Firefox 45.0 Beta 3 Testday. We will be focusing our testing on the following features: Search Refactoring, Synced Tabs Menu, Text to Speech, and Grouped Tabs Migration. Check out the detailed instructions via this etherpad.
No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.
Join us and help us make Firefox better! See you on Friday!
The following changes have been pushed to bugzilla.mozilla.org:
Discuss these changes on mozilla.tools.bmo.
This article has been coauthored by Aislinn Grigas, Senior Interaction Designer, Firefox Desktop
Cross posting with Mozilla’s Security Blog
November 3, 2015
Over the past few months, Mozilla has been improving the user experience of our privacy and security features in Firefox. One specific initiative has focused on the feedback shown in our address bar around a site’s security. The major changes are highlighted below along with the rationale behind each change.
Color and iconography are commonly used today to communicate to users when a site is secure. The most widely used patterns are coloring a lock icon and parts of the address bar green. This treatment has a straightforward rationale, given that green = good in most cultures. Firefox has historically used two different color treatments for the lock icon – a gray lock for Domain-validated (DV) certificates and a green lock for Extended Validation (EV) certificates. The average user is not likely to understand this color distinction between EV and DV certificates. The overarching message we want users to take from both certificate states is that their connection to the site is secure. We’re therefore updating the color of the lock when a DV certificate is used to match that of an EV certificate.
Although the same green icon will be used, the UI for a site using EV certificates will continue to differ from a site using a DV certificate. Specifically, EV certificates are used when Certificate Authorities (CA) verify the owner of a domain. Hence, we will continue to include the organization name verified by the CA in the address bar.
A second change we’re introducing addresses what happens when a page served over a secure connection contains Mixed Content. Firefox’s Mixed Content Blocker proactively blocks Mixed Active Content by default. Users historically saw a shield icon when Mixed Active Content was blocked and were given the option to disable the protection.
Since the Mixed Content state is closely tied to site security, the information should be communicated in one place instead of having two separate icons. Moreover, we have seen that the number of times users override mixed content protection is slim, and hence the need for dedicated mixed content iconography is diminishing. Firefox is also using the shield icon for another feature in Private Browsing Mode and we want to avoid making the iconography ambiguous.
The updated design that ships with Firefox 42 combines the lock icon with a warning sign which represents Mixed Content. When Firefox blocks Mixed Active Content, we retain the green lock since the HTTP content is blocked and hence the site remains secure.
For users who want to learn more about a site’s security state, we have added an informational panel to further explain differences in page security. This panel appears anytime a user clicks on the lock icon in the address bar.
Previously users could click on the shield icon in the rare case they needed to override mixed content protection. With this new UI, users can still do this by clicking the arrow icon to expose more information about the site security, along with a disable protection button.
There is a second category of Mixed Content called Mixed Passive Content. Firefox does not block Mixed Passive Content by default. However, when it is loaded on an HTTPS page, we let the user know with iconography and text. In previous versions of Firefox, we used a gray warning sign to reflect this case.
We have updated this iconography in Firefox 42 to a gray lock with a yellow warning sign. We degrade the lock from green to gray to emphasize that the site is no longer completely secure. In addition, we use a vibrant color for the warning icon to amplify that there is something wrong with the security state of the page.
We also use this iconography when the certificate or TLS connection used by the website relies on deprecated cryptographic algorithms.
The above changes will be rolled out in Firefox 42. Overall, the design improvements make it simpler for our users to understand whether or not their interactions with a site are secure.
We have made similar changes to the site security indicators in Firefox for Android, which you can learn more about here.
Firefox for Windows, Mac and Linux now lets you choose to receive push notifications from websites if you give them permission. This is similar to Web notifications, except now you can receive notifications for websites even when they’re not loaded in a tab. This is super useful for websites like email, weather, social networks and shopping, which you might check frequently for updates.
You can manage your notifications in the Control Center by clicking the icon on the left side of the address bar.
Push Notifications for Web Developers
To make this functionality possible, Mozilla helped establish the Web Push W3C standard that’s gaining momentum across the Web. We also continue to explore the new design pattern known as Progressive Web Apps. If you’re a developer who wants to implement push notifications on your site, you can learn more in this Hacks blog post.
More information:
This is a quick write-up to summarize my and Jeff’s experience using rr to debug a fairly rare intermittent reftest failure. There’s still a lot to be learned about how to use rr effectively, so I’m hoping that sharing this will help others.
First, given an offending pixel, I was able to set a breakpoint on it using these instructions. Next, using rr-dataflow, I was able to step from the offending bad pixel to the display item responsible for this pixel. Let me emphasize this for a second since it’s incredibly impressive. rr + rr-dataflow allows you to go from a buffer, through an intermediate surface, to the compositor on another thread, through another intermediate surface, back to the main thread and eventually back to the relevant display item. All of this was automated except for the point where the two pixels are blended together, which is logically ambiguous. The speed at which rr was able to reverse-continue through this execution was very impressive!
Here’s the trace of this part: rr-trace-reftest-pixel-origin
From here I started comparing a replay of a failing test and a non-failing one, and it was clear that the DisplayList was different: in one we have an nsDisplayBackgroundColor, in the other we don’t. From here I was able to step through the decoder and compare the sequences. This was very useful in ruling out possible theories. It was easy to step forward and backwards in the good and bad replay debugging sessions to test out various theories about race conditions and to understand at which part of the decode process the image was rejected. It turned out that we sent two decodes: one for the metadata that is used to size the frame tree, and the other for the image data itself.
In hindsight, it would have been quicker to start debugging this test by looking at the frame tree first (and, I imagine, for other tests by looking at the display list and layer tree). It works even better if you have a good and a bad trace to compare the difference in the frame trees. From here, I found that the difference in the layer tree came from a change hint that wasn’t guaranteed to arrive before the draw.
The problem is now well understood: when we do a sync decode on reftest draw, if there’s an image error we won’t flush the style hints since we’re already too deep in the painting pipeline.
In the last two weeks, we landed 130 PRs in the Servo organization’s repositories.
After months of work by vlad and many others, Windows support landed! Thanks to everyone who contributed fixes, tests, reviews, and even encouragement (or impatience!) to help us make this happen.
A geckolib target was added to our CI.
Screencast of this post being upvoted on reddit… from Windows!
We had a meeting on some CI-related woes, documenting tags and mentoring, and dependencies for the style subsystem.
The Monday Project Meeting
With the release of Firefox 44, we are pleased to welcome the 28 developers who contributed their first code change to Firefox in this release, 23 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:
The image above was created by Bryan Mathers for our presentation at BETT last week. It shows the way that, in broad brushstrokes, learning design should happen. Before microcredentials such as Open Badges this was a difficult thing to do as both the credential and the assessment are usually given to educators. The flow tends to go backwards from credentials instead of forwards from what we want people to learn.
But what if you really were starting from scratch? How could you design a digital skills framework that contains knowledge, skills, and behaviours worth learning? Having written my thesis on digital literacies and led Mozilla’s Web Literacy Map for a couple of years, I’ve got some suggestions.
One of the most important things to define is who your audience is for your digital skills framework. Is it for learners to read? Who are they? How old are they? Are you excluding anyone on purpose? Why / why not?
You might want to do some research and work around user personas as part of a user-centred design approach. This ensures you’re designing for real people instead of figments of your imagination (or, worse still, in line with your prejudices).
It’s also good practice to make the language used in the skills framework as precise as possible. Jargon is technical language used for its own sake, so avoid it where you can. There may be times when it’s impossible not to use a particular word (e.g. ‘meme’). If you do, then link to a definition or include a glossary. It’s also useful to check the ‘reading level’ of your framework and, if you really want a challenge, try using Up-Goer Five language.
It’s extremely easy, when creating a framework for learning, to fall into the ‘knowledge trap’. Our aim when creating the raw materials from which someone can build a curriculum is to focus on action. Knowledge should make a difference in practice.
One straightforward way to ensure that you’re focusing on action rather than head knowledge is to use verbs when constructing your digital skills framework. If you’re familiar with Bloom’s Taxonomy, then you may find The Differentiator useful. This pairs verbs with the various levels of Bloom’s.
A framework needs to be a living, breathing thing. It should be subject to revision and updated often. For this reason, you should add version numbers to your documentation. Ideally, the latest version should be at a canonical URL and you should archive previous versions to static URLs.
I would also advise releasing the first version of your framework not as ‘version 1.0’ but as ‘v0.1’. This shows that you’re willing for others to provide input, that there will be further versions, and that you know you haven’t got it right first time (and forevermore).
Questions? Comments? Ask me on Twitter (@dajbelshaw). I also consult around this kind of thing, so hit me up on hello@dynamicskillset.com
Tonight is Robbie Burns night, in honour of that great Scottish poet. But tonight had me thinking about another night in my past.
It was about 5 years ago, maybe less; I struggle to remember now. I was in the UK visiting family and my Dad was sick. Cancer and its treatment are tough: you have good weeks, you have bad weeks and you have really fucking bad weeks. This was a good week and for some reason I was in the UK.
Myself, my brother and my sister-in-law went down to see him that night. It was Robbie Burns night and that meant an excuse for haggis, really, truly terrible scotch, Scottish dancing and all that. There are many times when I look back at time with my Dad in those last few years. This was definitely one of those times. He was my Dad at his best, cracking jokes and having fun. Living life to the absolute fullest, while you still have that chance.
We had a great night. That ended way too soon.
Not long after that the cancer came back and that was that.
But suddenly tonight, in a bar in Portland I had these memories of my Dad in a waistcoat cracking jokes and having fun on Robbie Burns night. No-one else in the bar seemed to know what night it was. You'd think Robbie Burns night might get a little bit more appreciation, but hey.
In the many years I've been running this blog I've never written about my Dad passing away. Here's the first time. I miss him.
Hey Robbie Burns? Thanks for making me remember that night.
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.
This week's edition was edited by: nasa42, brson, and llogiq.
129 pull requests were merged in the last week.
See the triage digest and subteam reports for more details.
- cargo init
- btree_set::{IntoIter, Iter, Range} made covariant.
- slice::binary_search
- std::sync::mpsc: Add fmt::Debug stubs.
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- Add [ to the FOLLOW(ty) in macro future-proofing rules.
- Change RangeInclusive to use an enum.
Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:
- IndexAssign: a trait that allows overloading "indexed assignment" expressions like a[b] = c.
- Add an alias attribute to #[link] and -l.
- unreachable … and a lint.
If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.
Tweet us at @ThisWeekInRust to get your job offers listed here!
This week's Crate of the Week is racer, which powers code completion in all Rust development environments.
Thanks to Steven Allen for the suggestion.
Submit your suggestions for next week!
Memory errors are fundamentally state errors, and Rust's move semantics, borrowing, and aliasing XOR mutating help enormously for me to reason about how my program changes state as it executes, to avoid accidental shared state and side effects at a distance. Rust more than any other language I know enables me to do compiler driven design. And internalizing its rules has helped me design better systems, even in other languages.
— desiringmachines on /r/rust.
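A tiny example (my own illustration, not from the original post) of the aliasing-XOR-mutation rule the quote describes:

    fn main() {
        let mut scores = vec![1, 2, 3];
        let first = &scores[0];   // shared (immutable) borrow of `scores`
        // scores.push(4);        // rejected by the compiler: can't mutate
        //                        // `scores` while a shared borrow is alive
        println!("first = {}", first);
    }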
Thanks to dikaiosune for the suggestion.
Speaking of, downloadable fonts were exactly the same problem on the Sun Ultra-3 laptop I've been refurbishing; Oracle still provides a free Solaris 10 build of 38ESR, but it crashes on web fonts for reasons I have yet to diagnose, so I just have them turned off. Yes, it really is a SPARC laptop, a rebranded Tadpole Viper, and I think the fastest one ever made in this form factor (a 1.2GHz UltraSPARC IIIi). It's pretty much what I expected the PowerBook G5 would have been -- hot, overthrottled and power-hungry -- but Tadpole actually built the thing and it's not a disaster, relatively speaking. There's no JIT in this Firefox build, the brand new battery gets only 70 minutes of runtime even with the CPU clock-skewed to hell, it stands a very good chance of rendering me sterile and/or medium rare if I actually use it in my lap and it had at least one sudden overtemp shutdown and pooped all over the filesystem, but between Firefox, Star Office and pkgsrc I can actually use it. More on that for laughs in a future post.
It has been pointed out to me that Leopard Webkit has not made an update in over three months, so hopefully Tobias is still doing okay with his port.
Last year, we unveiled the Mozilla Open Software Patent License as part of our initiative to help limit the negative impacts that patents have on open source software. While that was an important first step for us, we continue to do more. This past Wednesday, Mozilla joined several other tech and software companies in filing an amicus brief with the Supreme Court of the United States in the Halo and Stryker cases.
In the brief, we urge the Court to limit the availability of treble damages. Treble damages are significant because they greatly increase the amount of money owed if a defendant is found to “willfully infringe” a patent. As a result, many open source projects and technology companies will refuse to look into or engage in discussions about patents, in order to avoid even a remote possibility of willful infringement. This makes it very hard to address the chilling effects that patents can have on open source software development, open innovation, and collaborative efforts.
We hope that our brief will help the Court see how this legal standard has affected technology companies and persuade the Court to limit treble damages.
In Firefox 43, we made it a default requirement for add-ons to be signed. This requirement can be disabled by toggling a preference that was originally scheduled to be removed in Firefox 44 for release and beta versions (this preference will continue to be available in the Nightly, Developer, and ESR Editions of Firefox for the foreseeable future).
We are delaying the removal of this preference to Firefox 46 for a couple of reasons: We’re adding a feature in Firefox 45 that allows temporarily loading unsigned restartless add-ons in release, which will allow developers of those add-ons to use Firefox for testing, and we’d like this option to be available when we remove the preference. We also want to ensure that developers have adequate time to finish the transition to signed add-ons.
The updated timeline is available on the signing wiki, and you can look up release dates for Firefox versions on the releases wiki. Signing will be mandatory in the beta and release versions of Firefox from 46 onwards, at which point unbranded builds based on beta and release will be provided for testing.
Modernize infrastructure:
In a continuing effort to enable faster, more reliable, and more easily-run tests for TaskCluster components, Dustin landed support for an in-memory, credential-free mock of Azure Table Storage in the azure-entities package. Together with the fake mock support he added to taskcluster-lib-testing, this allows tests for components like taskcluster-hooks to run without network access and without the need for any credentials, substantially decreasing the barrier to external contributions.
All release promotion tasks are now signed by default. Thanks to Rail for his work here to help improve verifiability and chain-of-custody in our upcoming release process. (https://bugzil.la/1239682) Beetmover has been spotted in the wild! Jordan has been working on this new tool as part of our release promotion project. Beetmover helps move build artifacts from one place to another (generally between S3 buckets these days), but can also be extended to perform validation actions inline, e.g. checksums and anti-virus. (https://bugzil.la/1225899)
Dustin configured the “desktop-test” and “desktop-build” docker images to build automatically on push. That means that you can modify the Dockerfile under `testing/docker`, push to try, and have the try job run in the resulting image, all without pushing any images. This should enable much quicker iteration on tweaks to the docker images. Note, however, that updates to the base OS images (ubuntu1204-build and centos6-build) still require manual pushes.
Mark landed Puppet code for base windows 10 support including secrets and ssh keys management.
Improve CI pipeline:
Vlad and Amy repurposed 10 Windows XP machines as Windows 7 to improve the wait times in that test pool (https://bugzil.la/1239785) Armen and Joel have been working on porting the Gecko tests to run under TaskCluster, and have narrowed the failures down to the single digits. This puts us on-track to enable Linux debug builds and tests in TaskCluster as the canonical build/test process.
Release:
Ben finished up work on enhanced Release Blob validation in Balrog (https://bugzil.la/703040), which makes it much more difficult to enter bad data into our update server.
You may recall Mihai, our former intern who we just hired back in November. Shortly after joining the team, he jumped into the releaseduty rotation to provide much-needed extra bandwidth. The learning curve here is steep, but over the course of the Firefox 44 release cycle, he’s taken on more and more responsibility. He’s even volunteered to do releaseduty for the Firefox 45 release cycle as well. Perhaps the most impressive thing is that he’s also taken the time to update (or write) the releaseduty docs so that the next person who joins the rotation will be that much further ahead of the game. Thanks for your hard work here, Mihai!
Operational:
Hal did some cleanup work to remove unused mozharness configs and directories from the build mercurial repos. These resources have long since moved into the main mozilla-central tree. Hopefully this will make it easier for contributors to find the canonical copy! (https://bugzil.la/1239003)
Hiring:
We’re still hiring for a full-time Build & Release Engineer, and we are still accepting applications for interns for 2016. Come join us!
Well, I don’t know about you, but all that hard work makes me hungry for pie. See you next week!
Mozilla Foundation Demos January 22 2016
Hello, SUMO Nation!
The third week of the new year is already behind us. Time flies when you’re not paying attention… What are you going to do this weekend? Let us know in the comments, if you feel like sharing :-) I hope to be in the mountains, getting some fresh (bracing) air, and enjoying nature.
Thank you for reading all the way down here… More to come next week! You know where to find us, so see you around – keep rocking the open & helpful web!
Bay Area Rust meetup for January 2016. Topics TBD.
Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.
There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.
First up was shobson with a cool demo of an animated disco ball made entirely with CSS. The demo uses a repeated radial gradient for the background, and linear gradients plus a border radius for the disco ball itself. The demo was made for use in shobson’s WordCamp talk about debugging CSS. A blog post with notes from the talk is available as well.
Next was craigcook, who presented Proton. It’s a CSS framework that is intentionally ugly to encourage its use for prototypes only. Unlike with other CSS frameworks, there’s no temptation to reuse its classes in your final page, which helps you avoid the presentational classes that normally plague sites built with a framework.
Proton’s website includes an overview of the layout and components provided, as well as examples of prototypes made using the framework.
If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!
See you next month!
A lot of exciting things are happening with Participation at Mozilla this month. Here’s a quick round-up of some of the things that are going on!
Since the start of this year, the Participation Infrastructure team has had a renewed focus on making mozillians.org a modern community directory to meet Mozilla’s growing needs.
Their first target for 2016 was to improve the UX on the profile edit interface.
“We chose it due to its relatively self-contained nature, and because many people were not happy with the current UX. After researching existing tools and applying the latest best practices, we designed, coded and deployed a new profile edit interface (which, by the way, is now renamed to Settings) that we are happy to deliver to all Mozillians.”
Read the full blog here!
Are you a passionate designer looking to contribute to Mozilla? You’ll be happy to hear there is a new way to contribute to the many design projects around Mozilla! Submit issues, find collaborators, and work on open source projects by getting involved!
Learn more here.
This weekend 136 participation leaders from all over the world are heading to Singapore to undergo two days of leadership training to develop the skills, knowledge and attitude to lead Participation in 2016.
Photo credit @thephoenixbird on Twitter
If you know someone attending don’t forget to share your questions and goals with them, and follow along over the weekend by watching the hashtag #MozSummit.
Stay tuned after the event for a debrief of the weekend!
If you’re interested in learning more about all the exciting new features, projects, and plans that were presented at Mozlando look no further! You can now watch the final plenary sessions on Air Mozilla (it’s a lot of fun so I highly recommend it!) here.
Share your questions and comments on discourse here.
Look forward to more updates like these in the coming months!
This was originally posted at StaySafeOnline.org in advance of Data Privacy Day.
Data Privacy Day – which arrives in just a week – is a day designed to raise awareness and promote best practices for privacy and data protection. It is a day that looks to the future and recognizes that we can and should do better as an industry. It reminds us that we need to focus on the importance of having the trust of our users.
We seek to build trust so we can collectively create the Web our users want – the Web we all want.
That Web is based on relationships, the same way that the offline world is. When I log in to a social media account, schedule a grocery delivery online or browse the news, I’m relying on those services to respect my data. While companies are innovating their products and services, they need to be innovating on user trust as well, which means designing to address privacy concerns – and making smart choices (early!) about how to manage data.
A recent survey by Pew highlights the thought that each user puts into their choices – and the contextual considerations in various scenarios. They concluded that many participants were annoyed by, and uncertain about, how their information was used, and that they are choosing not to interact with services they don't trust. This is a clear call to businesses to foster more trust with their users, which starts by making sure that there are people empowered within your company to ask the right questions: what do your users expect? What data do you need to collect? How can you communicate about that data collection? How should you protect their data? Is holding on to data a risk, or should you delete it?
It’s crucial that users are a part of this process – consumers’ data is needed to offer cool, new experiences and a user needs to trust you in order to choose to give you their data. Pro-user innovation can’t happen in a vacuum – the system as it stands today isn’t doing a good job of aligning user interests with business incentives. Good user decisions can be good business decisions, but only if we create thoughtful user-centric products in a way that closes the feedback loop so that positive user experiences are rewarded with better business outcomes.
Not prioritizing privacy in product decisions will impact the bottom line. From the many data breaches over the last few years to increasing evidence of eroding trust in online services, data practices are proving to be the dark horse in the online economy. When a company loses user trust, whether on privacy or anything else, it loses customers and the potential for growth.
Privacy means different things to different people but what's clear is that people make decisions about the products and services that they use based on how those companies choose to treat their users. Over time, the Internet ecosystem has evolved, as has its relationship with users – and some aspects of this evolution threaten the trust that lies at the heart of that relationship. Treating a user as a target – whether for an ad, purchase, or service – undermines the trust and relationship that a business may have with a consumer.
The solution is not to abandon the massive value that robust data can bring to users, but rather, to collect and use data leanly, productively and transparently. At Mozilla, we have created a strong set of internal data practices to ensure that data decisions align with our privacy principles. As an industry, we need to keep users at the center of the product vision rather than viewing them as targets of the product – it’s the only way to stay true to consumers and deliver the best, most trusted experiences possible.
Want to hear more about how businesses can build relationships with their users by focusing on trust and privacy? We’re holding events in Washington, D.C., and San Francisco with some of our partners to talk about it. Please join us!
Gathering data from Certificate Transparency logs, here's a snapshot in time of Let's Encrypt's certificate issuance rate per minute from 7-21 January 2016. On 20 January, DreamHost launched formal support for Let's Encrypt, which coincides with a rate increase.
Note: This is mostly an experimental post with embedding charts; I've more data in the queue.
This is our weekly gathering of Mozilla'a Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.
A few days ago the fantastic Fritz from the Netherlands told me that my Hands On Web Audio slides had stopped working and there was no sound coming out from them in Firefox.
@supersole oh noes! I reopened your slides: https://t.co/SO35UfljMI and it doesn't work in @firefox anymore
(works in chrome though..)
— Boring Stranger (@fritzvd) January 11, 2016
Which is pretty disappointing for a slide deck that is built to teach you about Web Audio!
I noticed that the issue was only on the introductory slide, which uses a modified version of Stuart Memo’s fantastic THX sound recreation; the rest of the slides did play sound.
I built an isolated test case (source) that used a parameter-capable version of the THX sound code, just in case the issue depended on the number of oscillators, and submitted this funnily titled bug to the Web Audio component: Entirely Web Audio generated sound cuts out after a little while, or emits random tap tap tap sounds then silence.
I can happily confirm that the bug has been fixed in Nightly and the fix will hopefully be “uplifted” to DevEdition very soon, as it was due to a regression.
Paul Adenot (who works in Web Audio and is a Web Audio spec editor, amongst a couple tons of other cool things) was really excited about the bug, saying it was very edge-casey! Yay! He also explained in lay terms what actually happened: “you’d have to have a frequency that goes down very very slowly so that the FFT code could not keep up”, which is what the THX sound is doing with the filter frequency automation.
I want to thank both Fritz, for spotting this and letting me know, and Stuart, for sharing his THX code. It’s amazing what happens when you put stuff on the net and lots of different people use it in different ways and configurations. Together we make everything more robust.
Of course also sending thanks to Paul and Ben for identifying and fixing the issue so fast! It’s not been even a week! Woohoo!
Well done everyone!
Since the start of this year, the Participation Infrastructure team has had a renewed focus on making mozillians.org a modern community directory to meet Mozilla’s growing needs. This will not be a one-time effort. We need to invest technically and programmatically in order to deliver a first-class product that will be the foundation for identity management across the Mozilla ecosystem.
Mozillians.org is full of functionality as it is today, but it is paying the debt of being developed by 5 different teams over the past 5 years. We started simple this time: we updated all core technology pieces, did privacy and security reviews, and started the process of consolidating and modernizing many of the things we do on the site.
Our first target was Profile Edit. We chose it due to its relatively self-contained nature, and because many people were not happy with the current UX. After researching existing tools and applying the latest best practices, we designed, coded and deployed a new profile edit interface (which, by the way, is now renamed to Settings) that we are happy to deliver to all Mozillians.
Have a look for yourself and don’t miss the chance to update your profile while you do it!
Nikos (on the front-end), Tasos and Nemo (on the back-end) worked hard to deliver this in a speedy manner (as they are used to), and the end result is a testament to what is coming next on Mozillians.org.
Our next target? Groups. Currently it is unclear what all those settings in groups are, what functionality they provide, and how teams within Mozilla will be using them. We will be tackling this soon. After that, search and stats will get our attention, in an ongoing effort to fortify mozillians.org functionality. Stay tuned, and as always feel free to file bugs and contribute in the process.
Last year I joked…
Thinking about writing a blog post listing the blog posts I’ve been meaning to write… Maybe that will save some time
— Adam Lofting (@adamlofting) November 20, 2015
Now, it has come to this.
But my most requested blog post by far is an update on the status of my shed/office that I was tagging on to the end of my blog posts at this time last year. Many people at Mozfest wanted to know about the shed… so here it is.
This time last year:
Starting in the new office today. It will take time to make it *nice* but it works for now. pic.twitter.com/sWoC4kFNLc
— Adam Lofting (@adamlofting) January 28, 2015
Some pictures from this morning:
It’s a pretty nice place to work now and it doubles as useful workshop on the weekends. It needs a few finishing touches, but the law of diminishing returns means those finishing touches are lower priority than work that needs to be done elsewhere in the house and garden. So it’ll stay like this a while longer.
The benefit of being a father again (Freya, my 3rd child, was born last week) is that while on paternity leave & between two baby bottles, I can hack on fun stuff.
A few months ago, I built a Pelican-based website for my running club; check it out at http://acr-dijon.org. Nothing's special about it, except that I am not the one feeding it. The content is added by people from the club who have zero knowledge about software, let alone stuff like vim or command-line tools.
I set up a github-based flow for them, where they add content through the github UI and its minimal reStructuredText preview feature - and then a few of my crons update the website on the server I host. For images and other media, they are uploading them via FTP using FireSSH in Firefox.
For the comments, I've switched from Disqus to ISSO after I got annoyed by the fact that it was impossible to display a simple Disqus UI for people to comment without having to log in.
I had to make my club friends go through a minimal reStructuredText syntax training, and things are more or less working now.
The system has a few caveats though:
So I've decided to build my own web editing tool with the following features:
The first step was to build a reStructuredText parser that would read some reStructuredText and render it back into a cleaner version.
We've imported almost 2000 articles into Pelican from the old blog, so I had a lot of samples to make my parser work well.
I first tried rst2rst but that parser was built for a very specific use case (text wrapping) and was incomplete. It was not parsing all of the reStructuredText syntax.
Inspired by it, I wrote my own little parser using docutils.
Understanding docutils is not a small task. The project is very powerful but quite complex. One thing that is sorely missing from docutils' parser tools is the ability to get the source text back from any node, including its children, so you can render the same source again.
That's roughly what I had to add in my code. It's ugly but it does the job: it will parse rst files and render the same content, minus all the extraneous empty lines, spaces, tabs etc.
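For the curious, the very first step looks roughly like this; a minimal sketch, not the project's actual code:

    # Parse reStructuredText into a docutils node tree. Rendering clean rST back
    # out then means walking this tree while keeping track of each node's source.
    from docutils.core import publish_doctree

    def parse_rst(text):
        """Return the docutils document tree for a reStructuredText string."""
        return publish_doctree(text)

    if __name__ == "__main__":
        doc = parse_rst("Title\n=====\n\nA *small* paragraph.\n")
        for node in doc.traverse():
            print(node.tagname)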
Content browsing is pretty straightforward: my admin tool lets you browse the Pelican content directory and lists all articles, organized by categories.
In our case, each category has a top directory in content. The browser parses the articles using my parser and displays paginated lists.
I had to add a cache system for the parser, because one of the directories contains over 1000 articles -- and browsing was kind of slow :)
The last big bit was the live editor. I've stumbled on a neat little tool called rsted, which provides a live preview of the reStructuredText as you are typing it. And it includes warnings!
Check it out: http://rst.ninjs.org/
I've stripped it and kept what I needed, and included it in my app.
I am quite happy with the result so far. I need to add real tests and a bit of documentation, and I will start to train my club friends on it.
The next features I'd like to add are:
The project lives here: https://github.com/AcrDijon/henet
I am not going to release it, but if someone finds it useful, I could.
It's built with Bottle & Bootstrap as well.
I wrote a long and probably dull chapter on closures and first-class and higher-order functions in Rust. It goes into some detail on the implementation and some of the subtleties like higher-ranked lifetime bounds.
I was going to post it here too, but it is really too long. Instead, pop over to the 'Rust for C++ programmers' repo and read it there.
I’m hacking on an assembly project, and wanted to document some of the tricks I was using for figuring out what was going on. This post might seem a little basic for folks who spend all day heads down in gdb or who do this stuff professionally, but I just wanted to share a quick intro to some tools that others may find useful. (oh god, I’m doing it)
If you're coming from gdb to lldb, there are a few differences in commands. LLDB has great documentation on some of the differences. Everything in this post about LLDB is pretty much there.
The bread and butter commands when working with gdb or lldb are:
You can hit enter if you want to run the last command again, which is really useful if you want to keep stepping over statements repeatedly.
I’ve been using LLDB on OSX. Let’s say I want to debug a program I can build, but is crashing or something:
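For example, with ./prog standing in for whatever binary you built:

    $ lldb ./prog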
Setting a breakpoint on jump to label:
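Assuming the label/symbol is called my_label, something like:

    (lldb) breakpoint set --name my_label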
Running the program until breakpoint hit:
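That one is simply:

    (lldb) run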
Seeing more of the current stack frame:
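One option is to dump a few words of memory at the stack pointer (the register name assumes x86-64):

    (lldb) memory read --size 8 --format x --count 4 $rsp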
Getting a back trace (call stack):
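Same spelling as gdb:

    (lldb) bt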
peeking at the upper stack frame:
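With the breakpoint-halted frame as frame 0, select its caller:

    (lldb) frame select 1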
back down to the breakpoint-halted stack frame:
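And back again:

    (lldb) frame select 0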
dumping the values of registers:
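For the general-purpose registers:

    (lldb) register read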
read just one register:
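Naming whichever register you care about (rax here is just an example):

    (lldb) register read rax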
When you’re trying to figure out what system calls are made by some C code, using dtruss is very helpful. dtruss is available on OSX and seems to be some kind of wrapper around DTrace.
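A typical invocation, with ./prog again standing in for your binary (dtruss needs root):

    $ sudo dtruss ./prog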
If you compile with -g to emit debug symbols, you can use lldb’s disassemble command to get the equivalent assembly:
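For example, once stopped at a breakpoint:

    (lldb) disassemble --frame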
Anyways, I’ve been learning some interesting things about OSX that I’ll be sharing soon. If you’d like to learn more about x86-64 assembly programming, you should read my other posts about writing x86-64 and a toy JIT for Brainfuck (the creator of Brainfuck liked it).
I should also do a post on Mozilla’s rr, because it can do amazing things like step backwards. Another day…
Every new year gives you an opportunity to sit back, relax, have some scotch and re-think the past year. Holidays give you enough free time. Even if you decide not to take a vacation around the holidays, it's usually calm and peaceful.
This time, I found myself thinking mostly about productivity, being effective, feeling busy, overwhelmed with work and other related topics.
When I started at Mozilla (almost 6 years ago!), I tried to apply all my GTD and time-management knowledge and techniques. Working remotely and in a different time zone was an advantage - I had close to zero interruptions. It worked perfectly.
Last year I realized that my productivity skills had somehow faded away. 40h+ workweeks, working on weekends and delivering goals in the last week of the quarter don't sound like good signs. Instead of being productive I felt busy.
"Every crisis is an opportunity". Time to make a step back and reboot myself. Burning out at work is not a good idea. :)
Here are some ideas/tips that I wrote down for myself; you may find them useful.
This is my list of things that I try to use every day. Looking forward to seeing improvements!
I would appreciate your thoughts on this topic. Feel free to comment or send a private email.
Happy Productive New Year!
Hello 2016! We’re happy to announce the first Rust release of the year, 1.6. Rust is a systems programming language focused on safety, speed, and concurrency.
As always, you can install Rust 1.6 from the appropriate page on our website, and check out the detailed release notes for 1.6 on GitHub. About 1100 patches were landed in this release.
This release contains a number of small refinements, one major feature, and a change to Crates.io.
The largest new feature in 1.6 is that libcore is now stable! Rust’s standard library is two-tiered: there’s a small core library, libcore, and the full standard library, libstd, that builds on top of it. libcore is completely platform agnostic, and requires only a handful of external symbols to be defined. Rust’s libstd builds on top of libcore, adding support for memory allocation, I/O, and concurrency. Applications using Rust in the embedded space, as well as those writing operating systems, often eschew libstd, using only libcore.
libcore being stabilized is a major step towards being able to write the lowest levels of software using stable Rust. There’s still future work to be done, however. This will allow for a library ecosystem to develop around libcore, but applications are not fully supported yet. Expect to hear more about this in future release notes.
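As a rough illustration (mine, not from the release notes), a library crate can opt out of libstd and lean on libcore alone; the function here is just a stand-in:

    // lib.rs: a #![no_std] library that depends only on libcore.
    #![no_std]

    // Explicit for clarity; under #![no_std] recent compilers make `core`
    // available automatically.
    extern crate core;

    /// Return the larger of two values using only core APIs.
    pub fn max_i32(a: i32, b: i32) -> i32 {
        core::cmp::max(a, b)
    }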
About 30 library functions and methods are now stable in 1.6. Notable improvements include:
The drain() family of functions on collections. These methods let you move elements out of a collection while allowing them to retain their backing memory, reducing allocation in certain situations.
A number of implementations of From for converting between standard library types, mainly between various integral and floating-point types.
Finally, Vec::extend_from_slice(), which was previously known as push_all(). This method has a significantly faster implementation than the more general extend().
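Here’s a small sketch (not from the announcement) exercising those newly stabilized APIs:

    fn main() {
        // drain(): move elements out while the Vec keeps its backing memory.
        let mut v = vec![1, 2, 3, 4, 5];
        let drained: Vec<i32> = v.drain(..).collect();
        assert_eq!(drained, [1, 2, 3, 4, 5]);
        assert!(v.is_empty()); // still usable; allocation retained

        // From conversions between integer and floating-point types.
        let wide = i64::from(42i32);
        let float = f64::from(3u8);

        // extend_from_slice(): the faster successor to push_all().
        let mut buf: Vec<u8> = Vec::new();
        buf.extend_from_slice(b"hello");
        assert_eq!(buf, b"hello".to_vec());

        let _ = (wide, float); // keep the compiler quiet about unused values
    }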
See the detailed release notes for more.
If you maintain a crate on Crates.io, you might have seen a warning: newly uploaded crates are no longer allowed to use a wildcard when describing their dependencies. In other words, this is not allowed:
[dependencies]
regex = "*"
Instead, you must actually specify a specific version or range of versions, using one of the semver crate’s various options: ^, ~, or =.
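For example (crate names and version numbers here are purely illustrative):

    [dependencies]
    regex = "^0.1"    # caret: any semver-compatible 0.1.x release
    log = "~0.3.4"    # tilde: >= 0.3.4, < 0.4.0
    rand = "=0.3.12"  # equals: exactly this version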
A wildcard dependency means that you work with any possible version of your dependency. This is highly unlikely to be true, and causes unnecessary breakage in the ecosystem. We’ve been advertising this change as a warning for some time; now it’s time to turn it into an error.
We had 132 individuals contribute to 1.6. Thank you so much!
One of the advantages of listing an add-on or theme on addons.mozilla.org (AMO) is that you’ll get statistics on your add-on’s usage. These stats, which are covered by the Mozilla privacy policy, provide add-on developers with information such as the number of downloads and daily users, among other insights.
Currently, the data that generates these statistics can go back as far as 2007, as we haven’t had an archiving policy. As a result, statistics take up the vast majority of disk space in our database and require a significant amount of processing and operations time. Statistics over a year old are very rarely accessed, and the value of their generation is very low, while the costs are increasing.
To reduce our operating and development costs, and increase the site’s reliability for developers, we are introducing an archiving policy.
In the coming weeks, statistics data over one year old will no longer be stored in the AMO database, and reports generated from them will no longer be accessible through AMO’s add-on statistics pages. Instead, the data will be archived and maintained as plain text files, which developers can download. We will write a follow-up post when these archives become available.
If you’ve chosen to keep your add-on’s statistics private, they will remain private when stats are archived. You can check your privacy settings by going to your add-on in the Developer Hub, clicking on Edit Listing, and then Technical Details.
The total number of users and other cumulative counts on add-ons and themes will not be affected and these will continue to function.
If you have feedback or concerns, please head to our forum post on this topic.
mconley livehacks on real Firefox bugs while thinking aloud.
One of the things the Firefox team has been doing recently is having onboarding sessions for new hires. This onboarding currently covers:
My first day consisted of some useful HR presentations and then I was given my laptop and a pointer to a wiki page on building Firefox. Needless to say, it took me a while to get started! It would have been super convenient to have an introduction to all the stuff above.
I’ve been asked to do the C++ and Gecko session three times. All of the sessions are open to whoever wants to come, not just the new hires, and I think yesterday’s session was easily the most well-attended yet: somewhere between 10 and 20 people showed up. Yesterday’s session was the first session where I made the slides available to attendees (should have been doing that from the start…) and it seemed equally useful to make the slides available to a broader audience as well. The Gecko and C++ Onboarding slides are up now!
This presentation is a “living” presentation; it will get updated for future sessions with feedback and as I think of things that should have been in the presentation or better ways to set things up (some diagrams would be nice…). If you have feedback (good, bad, or ugly) on particular things in the slides or you have suggestions on what other things should be covered, please contact me! Next time I do this I’ll try to record the presentation so folks can watch that if they prefer.
Brendan is back, and he has a plan to save the Web. It’s a big and bold plan, and it may just work. I am pretty excited about this. If you have 5 minutes to read along, I’ll explain why I think you should be as well.
The Web is broken
Let’s face it, the Web today is a mess. Everywhere we go online we are constantly inundated with annoying ads. Often pages are more ads than content, and the more ads the industry throws at us, the more we ignore them, and the more obnoxious ads get, trying to catch our attention. As Brendan explains in his blog post, the browser used to be on the user’s side—we call browsers the user agent for a reason. Part of the early success of Firefox was that it blocked popup ads. But somewhere over the last 10 years of modern Web browsers, browsers lost their way and stopped being the user’s agent alone. Why?
Browsers aren’t free
Making a modern Web browser is not free. It takes hundreds of engineers to make a competitive modern browser engine. Someone has to pay for that, and that someone needs to have a reason to pay for it. Google doesn’t make Chrome for the good of mankind. Google makes Chrome so you can consume more Web and, along with it, more Google ads. Each time you click on one, Google makes more money. Chrome is a billion dollar business for Google. And the same is true for pretty much every other browser. Every major browser out there is funded through advertisement. No browser maker can escape this dilemma. Maybe now you understand why no major browser ships with a built-in, enabled-by-default ad blocker, even though ad blockers are by far the most popular add-ons.
Our privacy is at stake
It’s not just the unregulated flood of advertisement that needs a solution. Every ad you see is often selected based on sensitive private information advertisement networks have extracted from your browsing behavior through tracking. Remember how the FBI used to track what books Americans read at the library, and it was a big scandal? Today the Googles and Facebooks of the world know almost every site you visit, everything you buy online, and they use this data to target you with advertisement. I am often puzzled why people are so afraid of the NSA spying on us but show so little concern about all the deeply personal data Google and Facebook are amassing about everyone.
Blocking alone doesn’t scale
I wish the solution was as easy as just blocking all ads. There is a lot of great Web content out there: news, entertainment, educational content. It’s not free to make all this content, but we have gotten used to consuming it “for free”. Banning all ads without an alternative mechanism would break the economic backbone of the Web. This dilemma has existed for many years, and the big browser vendors seem to have given up on it. It’s hard to blame them. How do you disrupt the status quo without sawing off the (ad revenue) branch you are sitting on?
It takes a newcomer to fix this mess
I think it’s unlikely that the incumbent browser vendors will make any bold moves to solve this mess. There is too much money at stake. I am excited to see a startup take a swipe at this problem, because they have little to lose (seed money aside). Brave is getting the user agent back into the game. Browsers have intentionally remained silent onlookers to the ad industry invading users’ privacy. With Brave, Brendan makes the user agent step up and fight for the user as it was always intended to do.
Brave basically consists of two parts: part one blocks third-party ad content and tracking signals. Instead of these, Brave inserts alternative ad content. Sites can sign up to get a fair share of any ads that Brave displays for them. The big change in comparison to the status quo is that the Brave user agent is in control and can regulate what you see. It’s like a speed limit for advertisement on the Web, with the goal to restore balance and give sites a fair way to monetize while giving the user control through the user agent.
Making money with a better Web
The ironic part of Brave is that it’s for-profit. Brave can make money by reducing obnoxious ads and protecting your privacy at the same time. If Brave succeeds, it’s going to drain money away from the crappy privacy-invasive obnoxious advertisement world we have today, and publishers and sites will start transacting in the new Brave world that is regulated by the user agent. Brave will take a cut of these transactions. And I think this is key. It aligns the incentives right. The current funding structure of major browsers encourages them to keep things as they are. Brave’s incentive is to bring down the whole diseased temple and usher in a better Web. Exciting.
Quick update: I had a chance to look over the Brave GitHub repo. It looks like the Brave Desktop browser is based on Chromium, not Gecko. Yes, you read that right. Brave is using Google’s rendering engine, not Mozilla’s. Much to write about this one, but it will definitely help Brave “hide” better in the large volume of Chrome users, making it harder for sites to identify and block Brave users. Brave for iOS seems to be a fork of Firefox for iOS, but it manages to block ads (Mozilla says they can’t).
@media (-webkit-transform-3d) is a funny thing that exists on the web. It's like, a media query feature in the form of a prefixed CSS property, which should tell you if your (once upon a time probably Safari-only) browser supports 3D transforms, invented back in the day before we had @supports.
(According to Apple docs it first appeared in Safari 4, alongside the other -webkit-transition and -webkit-transform-2d hybrid-media-query-feature-prefixed-css-properties-things that you should immediately forget exist.)
Older versions of Modernizr used this (and only this) to detect support for 3D transforms, and that seemed pretty OK. (They also did the polite thing and tested @media (transform-3d), but no browser has ever actually supported that, as it turns out). And because they're so consistently polite, they've since updated the test to prefer @supports too (via a pull request from Edge developer Jacob Rossi).
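If you need this kind of detection yourself, a feature query along these lines is the way to go (the selector is made up for the example):

    /* Only apply 3D transforms where the browser actually supports them. */
    @supports (transform: translateZ(0)) {
      .fancy-cube {
        transform: rotateY(45deg) translateZ(100px);
      }
    }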
As it turns out, other browsers have been updated to support 3D CSS transforms, but sites didn't go back and update their version of Modernizr. So unless you support @media (-webkit-transform-3d), these sites break. Niche websites like yahoo.com and about.com.
So, anyways. I added @media (-webkit-transform-3d) to the Compat Standard and we added support for it in Firefox so websites stop breaking.
But you shouldn't ever use it—use @supports. In fact, don't even share this blog post. Maybe delete it from your browser history just in case.
The following changes have been pushed to bugzilla.mozilla.org:
Discuss these changes on mozilla.tools.bmo.
As an effort to reduce the APK size of Firefox for Android and to remove unnecessary code, I will be helping remove the Honeycomb code throughout the Fennec project. Honeycomb will no longer be supported as of Firefox 46, so this code is not necessary.
Bug 1217675 will keep track of the progress.
Hopefully this will help reduce the APK size some and clean up the road for killing Gingerbread hopefully sometime in the near future.
Since June of last year, I’ve been co-founding a new startup called Brave Software with Brendan Eich. With our amazing team, we're developing something pretty epic.
We're building the next-generation of browsers for smartphones and laptops as part of our new ad-tech platform. Our terms of use give our users control over their personal data by blocking ad trackers and third party cookies. We re-integrate fewer and better ads directly into programmatic ad positions, paying revenue shares to users and publishers to support both of these essential parties in the web ecosystem.
Coming built in, we have new faster engines for tracking protection, ad block, HTTPS Everywhere, safe ads with rev-share, and more. We're seeing massive web page load time speedups.
We're starting to bring people in for early developer build access on all platforms.
I’m happy to share that the browsers we’re developing are fully open source. We welcome contributors, and would love your help.
Some of the repositories include:
I put the slides for my ManhattanJS talk, "PIEfection" up on GitHub the other day (sans images, but there are links in the source for all of those).
I completely neglected to talk about the Maillard reaction, which is responsible for food tasting good, and specifically for browning pie crusts. tl;dr: Amino acid (protein) + sugar + ~300°F (~150°C) = delicious. There are innumerable and poorly understood combinations of amino acids and sugars, but this class of reaction is responsible for everything from searing steaks to browning crusts to toasting marshmallows.
Above ~330°F, you get caramelization, which is also a delicious part of the pie and crust, but you don't want to overdo it. Starting around ~400°F, you get pyrolysis (burning, charring, carbonization) and below 285°F the reaction won't occur (at least not quickly) so you won't get the delicious compounds.
(All of these are, of course, temperatures measured in the material, not in the air of the oven.)
So, instead of an egg wash on your top crust, try whole milk, which has more sugar to react with the gluten in the crust.
I also didn't get a chance to mention a rolling technique I use, that I learned from a cousin of mine, in whose baking shadow I happily live.
When rolling out a crust after it's been in the fridge, first roll it out in a long stretch, then fold it in thirds; do it again; then start rolling it out into a round. Not only do you add more layer structure (mmm, flaky, delicious layers) but it'll fill in the cracks that often form if you try to roll it out directly, resulting in a stronger crust.
Those pepper flake shakers, filled with flour, are a great way to keep adding flour to the workspace without worrying about your buttery hands.
For transferring the crust to the pie plate, try rolling it up onto your rolling pin and unrolling it on the plate. Tapered (or "French") rolling pins (or a wine bottle) are particularly good at this since they don't have moving parts.
Finally, thanks again to Jenn for helping me get pies from one island to another. It would not have been possible without her!
Is the omnipresence of social networks, search engines, and advertising compatible with our right to privacy?
Four score and many moons ago, I decided to move this blog from Blogger to WordPress. The transition took longer than expected, but it’s finally done.
If you’ve been following along at the old address, https://mykzilla.blogspot.com/, now’s the time to update your address book! If you’ve been going to https://mykzilla.org/, however, or you read the blog on Planet Mozilla, then there’s nothing to do, as that’s the new address, and Planet Mozilla has been updated to syndicate posts from it.
This post was first published on the blog at https://blog.mozilla.org/community. Many thanks to Aryx and Coce for the translation!
All over the world, passionate Mozillians are working to advance Mozilla's mission. But ask five different Mozillians what that mission is, and you may well get seven different answers.
At the end of last year, Mozilla's CEO Chris Beard laid out a clear view of Mozilla's mission, vision and role, and showed how our products will bring us closer to that goal over the next five years. The aim of these strategic guidelines is to build a concise, shared understanding of our goals across Mozilla as a whole, making it easier for us as individuals to make decisions and to recognize opportunities that move Mozilla forward.
We cannot achieve Mozilla's mission on our own. The thousands of Mozillians around the world need to stand behind it, so that we can achieve incredible things quickly and with a louder voice than ever before.
That is why one of the Participation Team's six strategic initiatives for the first half of the year is to familiarize as many Mozillians as possible with these guidelines, so that we can have our most significant impact yet in 2016. We will publish a further post that takes a closer look at the Participation Team's strategy for 2016.
Understanding this strategy will be essential for anyone who wants to make a difference at Mozilla this year, because it will determine what we stand for, where we invest our resources, and which projects we focus on in 2016.
At the start of the year we will go into this strategy in more detail and share more about how the various teams and projects at Mozilla are working towards these goals.
The current call to action is to reflect on these goals in the context of your own work, and on how you want to contribute at Mozilla in the coming year. This will help shape your innovation, ambition, and impact in 2016.
We hope you will join the discussion and share your questions, comments, and plans for advancing the strategic guidelines in 2016 here on Discourse, and share your thoughts on Twitter with the hashtag #Mozilla2016Strategy.
Our Mission
To ensure the Internet is a global public resource, accessible to all.
Our Vision
An Internet that truly puts people first. An Internet where people can shape their own experience. An Internet where people can make their own choices and are safe and independent.
Our Role
Mozilla stands up for you, quite literally, in your online life. We advocate for you, both within your online experience and for your interests in the state of the Internet.
Our Work
Our Pillars
How we want to drive positive change in the future
How we work is just as important as what we aim for. Our health and lasting impact depend on how much our products and activities:
Last week we ran an internal “hack day” here at the Mozilla space in London. It was just a bunch of software engineers looking at various hardware boards and things and learning about them.
Here’s what we did!
I essentially kind of bricked my Arduino Duemilanove trying to get it working with Johnny Five, but it was fine–apparently there’s a way to recover it using another Arduino, and someone offered to help with that in the next NodeBots London, which I’m going to attend.
He thinks he’s having issues with cables. It seems like the boards are not reset automatically by the Arduino IDE nowadays? He found that the button on the board actually resets the board when pressed, i.e. it’s the RESET button.
On the Raspberry Pi side of things, he was very happy to put all his old-school Linux skills in action configuring network interfaces without GUIs!
Played with mDNS advertising and listening to services on Raspberry Pi.
(He was very quiet)
(He also built a very nice LEGO case for the Raspberry Pi, but I do not have a picture, so just imagine it).
Wilson: “I got my Raspberry Pi on the Wi-Fi”
Francisco: “Sorry?”
Wilson: “I mean, you got my Raspberry Pi on the network. And now I’m trying to build a web app on the Pi…”
Exploring the Pebble with Linux. There’s a libpebble, and he managed to connect…
(Sorry, I had to leave early so I do not know what else Chris did!)
Updated, 20 January: Chris told me he just managed to successfully connect to the Pebble watch using the Bluetooth WebAPI. It requires two Gecko patches (one regression patch and one for an obvious logic error that he hasn’t filed yet). PROGRESS!
~~~
So as you can see we didn’t really get super far in just a day, and I even ended up with unusable hardware. BUT! we all learned something, and next time we know what NOT to do (or at least I DO KNOW what NOT to do!).
Back in December I got a desperate email from this person. A woman who said her Instagram had been hacked and since she found my contact info in the app she mailed me and asked for help. I of course replied and said that I have nothing to do with her being hacked but I also have nothing to do with Instagram other than that they use software I’ve written.
Today she writes back. Clearly not convinced I told the truth before, and now she strikes back with more “evidence” of my wrongdoings.
Dear Daniel,
I had emailed you a couple months ago about my “screen dumps” aka screenshots and asked for your help with restoring my Instagram account since it had been hacked, my photos changed, and your name was included in the coding. You claimed to have no involvement whatsoever in developing a third party app for Instagram and could not help me salvage my original Instagram photos, pre-hacked, despite Instagram serving as my Photography portfolio and my career is a Photographer.
Since you weren’t aware that your name was attached to Instagram related hacking code, I thought you might want to know, in case you weren’t already aware, that your name is also included in Spotify terms and conditions. I came across this information using my Spotify which has also been hacked into and would love your help hacking out of Spotify. Also, I have yet to figure out how to unhack the hackers from my Instagram so if you change your mind and want to restore my Instagram to its original form as well as help me secure my account from future privacy breaches, I’d be extremely grateful. As you know, changing my passwords did nothing to resolve the problem. Please keep in mind that Facebook owns Instagram and these are big companies that you likely don’t want to have a trail of evidence that you are a part of an Instagram and Spotify hacking ring. Also, Spotify is a major partner of Spotify so you are likely familiar with the coding for all of these illegally developed third party apps. I’d be grateful for your help fixing this error immediately.
Thank you,
[name redacted]
P.S. Please see attached screen dump for a screen shot of your contact info included in Spotify (or what more likely seems to be a hacked Spotify developed illegally by a third party).
Here’s the Instagram screenshot she sent me in a previous email:
I’ve tried to respond with calm and clear reasonable logic and technical details on why she’s seeing my name there. That clearly failed. What do I try next?
I was recently asked an excellent question when I promoted the LFNW CFP on IRC:
As someone who has never done a talk, but wants to, what kind of knowledge do you need about a subject to give a talk on it?
If you answer “yes” to any of the following questions, you know enough to propose a talk:
I personally try to propose talks I want to hear, because the deadline of a CFP or conference is great motivation to prioritize a cool project over ordinary chores.
Howdy mozillians!
Last week – on Friday, January 15th – we held Aurora 45.0 Testday; and, of course, it was another outstanding event!
Thank you all for participating! A big thank you to all our active moderators too!
Results:
I strongly advise every one of you to reach out to us, the moderators, via #qa during the events whenever you encounter any kind of failure. Keep up the great work! \o/
And keep an eye on QMO for upcoming events!
For the last three years I have had the opportunity to send out a reminder to Mozilla staff that Martin Luther King Jr. Day is coming up, and that U.S. employees get the day off. It has turned into my MLK Day eve ritual. I read his letters, listen to speeches, and then I compose a belabored paragraph about Dr. King with some choice quotes.
If you didn’t get a chance to celebrate Dr. King’s legacy and the movements he was a part of, you still have a chance:
As I outlined in an earlier post, libmacro is a new crate designed to be used by procedural macro authors. It provides the basic API for procedural macros to interact with the compiler. I expect higher level functionality to be provided by library crates. In this post I'll go into a bit more detail about the API I think should be exposed here.
This is a lot of stuff. I've probably missed something. If you use syntax extensions today and do something with libsyntax that would not be possible with libmacro, please let me know!
I previously introduced MacroContext as one of the gateways to libmacro. All procedural macros will have access to a &mut MacroContext.
I described the tokens module in the last post, so I won't repeat that here.
There are a few more things I thought of. I mentioned a TokenStream, which is a sequence of tokens. We should also have TokenSlice, which is a borrowed slice of tokens (the slice to TokenStream's Vec). These should implement the standard methods for sequences; in particular they support iteration, so can be mapped, etc.
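As a rough sketch of what that might look like for a macro author (this is against the proposed API, not an existing one; the assumption that TokenSlice exposes iter() and that Token is cloneable is mine):
// Sketch only: treat a TokenSlice like any other borrowed sequence.
fn collect_tokens(tokens: &TokenSlice) -> Vec<Token> {
    tokens.iter().cloned().collect()
}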
In the earlier blog post, I talked about a token kind called Delimited which contains a delimited sequence of tokens. I would like to rename that to Sequence and add a None variant to the Delimiter enum. The None option is so that we can have blocks of tokens without using delimiters. It will be used for noting unsafety and other properties of tokens. Furthermore, it is useful for macro expansion (replacing the interpolated AST tokens currently present). Although None blocks do not affect scoping, they do affect precedence and parsing.
We should provide an API for creating tokens. By default these have no hygiene information and come with a span which has no place in the source code, but shows the source of the token to be the procedural macro itself (see below for how this interacts with expansion of the current macro). I expect a make_ function for each kind of token. We should also have an API for creating tokens in a given scope (which does the same thing but with provided hygiene information). This could be considered an over-rich API, since the hygiene information could be set after construction. However, since hygiene is fiddly and annoying to get right, we should make it as easy as possible to work with.
There should also be a function for creating a token which is just a fresh name. This is useful for creating new identifiers. Although this can be done by interning a string and then creating a token around it, it is used frequently enough to deserve a helper function.
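A sketch of how those constructors might fit together; fresh_name is an illustrative name for the fresh-name helper described above, and quote! is the quasi-quoting macro discussed later in this post:
// Sketch only: declare a counter bound to a freshly created identifier.
fn declare_counter(cx: &mut MacroContext) -> TokenStream {
    let name = fresh_name(cx);              // hypothetical fresh-name helper
    quote!(cx, let $name = 0;).unwrap()     // proposed quasi-quoting macro, see below
}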
Procedural macros should report errors, warnings, etc. via the MacroContext. They should avoid panicking as much as possible since this will crash the compiler (once catch_panic is available, we should use it to catch such panics and exit gracefully; however, they will certainly still mean aborting compilation).
Libmacro will 're-export' DiagnosticBuilder from syntax::errors. I don't actually expect this to be a literal re-export. We will use libmacro's version of Span, for example.
impl MacroContext {
    pub fn struct_error(&self, &str) -> DiagnosticBuilder;
    pub fn error(&self, Option<Span>, &str);
}

pub mod errors {
    pub struct DiagnosticBuilder { ... }
    impl DiagnosticBuilder { ... }
    pub enum ErrorLevel { ... }
}
There should be a macro try_emit!, which reduces a Result<T, ErrStruct> to a T or calls emit() and then calls unreachable!() (if the error is not fatal, then it should be upgraded to a fatal error).
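Under those assumptions, usage might look like the following sketch (tokenize is the function introduced just below; none of this is a finalized API):
// Sketch only: either get the tokens or emit the diagnostic and stop expansion.
fn expand(cx: &mut MacroContext) -> TokenStream {
    let tokens = try_emit!(tokenize(cx, "fn generated() {}"));
    tokens
}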
The simplest function here is tokenize, which takes a string (&str) and returns a Result<TokenStream, ErrStruct>. The string is treated like source text. The success option is the tokenised version of the string. I expect this function must take a MacroContext argument.
We will offer a quasi-quoting macro. This will return a TokenStream (in contrast to today's quasi-quoting which returns AST nodes), to be precise a Result<TokenStream, ErrStruct>. The string which is quoted may include metavariables ($x), and these are filled in with variables from the environment. The type of the variables should be either a TokenStream, a TokenTree, or a Result<TokenStream, ErrStruct> (in this last case, if the variable is an error, then it is just returned by the macro). For example,
fn foo(cx: &mut MacroContext, tokens: TokenStream) -> TokenStream {
quote!(cx, fn foo() { $tokens }).unwrap()
}
The quote! macro can also handle multiple tokens when the variable corresponding with the metavariable has type [TokenStream] (or is dereferencable to it). In this case, the same syntax as used in macros-by-example can be used. For example, if x: Vec<TokenStream> then quote!(cx, ($x),*) will produce a TokenStream of a comma-separated list of tokens from the elements of x.
Since the tokenize function is a degenerate case of quasi-quoting, an alternative would be to always use quote! and remove tokenize. I believe there is utility in the simple function, and it must be used internally in any case.
These functions and macros should create tokens with spans and hygiene information set as described above for making new tokens. We might also offer versions which take a scope and use that as the context for tokenising.
There are some common patterns for tokens to follow in macros. In particular those used as arguments for attribute-like macros. We will offer some functions which attempt to parse tokens into these patterns. I expect there will be more of these in time; to start with:
pub mod parsing {
// Expects `(foo = "bar"),*`
pub fn parse_keyed_values(&TokenSlice, &mut MacroContext) -> Result<Vec<(InternedString, String)>, ErrStruct>;
// Expects `"bar"`
pub fn parse_string(&TokenSlice, &mut MacroContext) -> Result<String, ErrStruct>;
}
To be honest, given the token design in the last post, I think parse_string is unnecessary, but I wanted to give more than one example of this kind of function. If parse_keyed_values is the only one we end up with, then that is fine.
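For an attribute-like macro invoked as, say, #[my_attr(name = "foo", version = "1")], a sketch of the intended usage (the attribute and keys are purely illustrative, and the functions are the proposed ones above):
// Sketch only: pull `key = "value"` pairs out of the macro's argument tokens.
fn read_config(cx: &mut MacroContext, args: &TokenSlice) -> Vec<(InternedString, String)> {
    try_emit!(parsing::parse_keyed_values(args, cx))
}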
The goal with the pattern matching API is to allow procedural macros to operate on tokens in the same way as macros-by-example. The pattern language is thus the same as that for macros-by-example.
There is a single macro, which I propose calling matches. Its first argument is the name of a MacroContext. Its second argument is the input, which must be a TokenSlice (or dereferencable to one). The third argument is a pattern definition. The macro produces a Result<T, ErrStruct> where T is the type produced by the pattern arms. If the pattern has multiple arms, then each arm must have the same type. An error is produced if none of the arms in the pattern are matched.
The pattern language follows the language for defining macros-by-example (but is slightly stricter). There are two forms, a single pattern form and a multiple pattern form. If the first character is a { then the pattern is treated as a multiple pattern form; if it starts with ( then as a single pattern form; otherwise it is an error (which causes a panic with a Bug error, as opposed to returning an Err).
The single pattern form is (pattern) => { code }. The multiple pattern form is {(pattern) => { code } (pattern) => { code } ... (pattern) => { code }}. code is any old Rust code which is executed when the corresponding pattern is matched. The pattern follows from macros-by-example - it is a series of characters treated as literals, meta-variables indicated with $, and the syntax for matching multiple variables. Any meta-variables are available as variables in the righthand side, e.g., $x becomes available as x. These variables have type TokenStream if they appear singly or Vec<TokenStream> if they appear multiply (or Vec<Vec<TokenStream>> and so forth).
Examples:
matches!(cx, input, (foo($x:expr) bar) => { quote!(cx, foo_bar($x)).unwrap() }).unwrap()
matches!(cx, input, {
    () => {
        cx.err("No input?");
    }
    (foo($($x:ident),+) bar) => {
        println!("found {} idents", x.len());
        quote!(cx, ($x);*).unwrap()
    }
})
Note that since we match AST items here, our backwards compatibility story is a bit complicated (though hopefully not much more so than with current macros).
The intention of the design is that the actual hygiene algorithm applied is irrelevant. Procedural macros should be able to use the same API if the hygiene algorithm changes (of course the result of applying the API might change). To this end, all hygiene objects are opaque and cannot be directly manipulated by macros.
I propose one module (hygiene) and two types: Context and Scope.
A Context is attached to each token and contains all hygiene information about that token. If two tokens have the same Context, then they may be compared syntactically. The reverse is not true - two tokens can have different Contexts and still be equal. Contexts can only be created by applying the hygiene algorithm and cannot be manipulated, only moved and stored.
MacroContext has a method fresh_hygiene_context for creating a new, fresh Context (i.e., a Context not shared with any other tokens).
MacroContext has a method expansion_hygiene_context for getting the Context where the macro is defined. This is equivalent to .expansion_scope().direct_context(), but might be more efficient (and I expect it to be used a lot).
A Scope provides information about a position within an AST at a certain point during macro expansion. For example,
fn foo() {
    a
    {
        b
        c
    }
}
a and b will have different Scopes. b and c will have the same Scopes, even if b was written in this position and c is due to macro expansion. However, a Scope may contain more information than just the syntactic scopes; for example, it may contain information about pending scopes yet to be applied by the hygiene algorithm (i.e., information about let expressions which are in scope).
Note that a Scope means a scope in the macro hygiene sense, not the commonly used sense of a scope declared with {}. In particular, each let statement starts a new scope and the items and statements in a function body are in different scopes.
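To make that concrete with plain Rust (no libmacro involved), here is a small, runnable illustration of where those hygiene scopes would begin; the comments mark the scopes:
fn main() {
    // Scope A: the function body opens a scope.
    let x = 1;                    // each `let` starts a new hygiene scope
    // Scope B: `x` is visible from here on.
    let y = x + 1;                // and another one
    // Scope C: both `x` and `y` are visible here.
    println!("x = {}, y = {}", x, y);
}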
The functions lookup_item_scope and lookup_statement_scope take a MacroContext and a path, represented as a TokenSlice, and return the Scope which that item defines, or an error if the path does not refer to an item or the item does not define a scope of the right kind.
The function lookup_scope_for is similar, but returns the Scope in which an item is declared.
MacroContext has a method expansion_scope for getting the scope in which the current macro is being expanded.
Scope has a method direct_context which returns a Context for items declared directly (c.f., via macro expansion) in that Scope.
Scope has a method nested which creates a fresh Scope nested within the receiver scope.
Scope has a static method empty for creating an empty scope, that is one with no scope information at all (note that this is different from a top-level scope).
I expect the exact API around Scopes and Contexts will need some work. Scope seems halfway between an intuitive, algorithm-neutral abstraction, and the scopes from the sets-of-scopes hygiene algorithm. I would prefer Scope to be more abstract; on the other hand, macro authors may want fine-grained control over hygiene application.
pub mod hygiene {
pub fn add(cx: &mut MacroContext, t: &Token, scope: &Scope) -> Token;
// Maybe unnecessary if we have direct access to Tokens.
pub fn set(t: &Token, cx: &Context) -> Token;
// Maybe unnecessary - can use set with cx.expansion_hygiene_context().
// Also, bad name.
pub fn current(cx: &MacroContext, t: &Token) -> Token;
}
add adds scope to any context already on t (Context should have a similar method). Note that the implementation is a bit complex - the nature of the Scope might mean we replace the old context completely, or add to it.
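A minimal sketch of how a macro might use this, assuming the signatures above; the function name, the choice of scope, and the TokenSlice iteration are illustrative:
// Sketch only: stamp the current expansion scope onto a slice of tokens.
fn attach_expansion_scope(cx: &mut MacroContext, tokens: &TokenSlice) -> Vec<Token> {
    let scope = cx.expansion_scope();
    tokens.iter().map(|t| hygiene::add(cx, t, &scope)).collect()
}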
By default, the current macro will be expanded in the standard way, having hygiene applied as expected. Mechanically, hygiene information is added to tokens when the macro is expanded. Assuming the sets of scopes algorithm, scopes (for example, for the macro's definition, and for the introduction) are added to any scopes already present on the token. A token with no hygiene information will thus behave like a token in a macro-by-example macro. Hygiene due to nested scopes created by the macro does not need to be taken into account by the macro author; this is handled at expansion time.
Procedural macro authors may want to customise hygiene application (it is common in Racket), for example, to introduce items that can be referred to by code in the call-site scope.
We must provide an option to expand the current macro without applying hygiene; the macro author must then handle hygiene. For this to work, the macro must be able to access information about the scope in which it is applied (see MacroContext::expansion_scope, above) and to supply a Scope indicating scopes that should be added to tokens following the macro expansion.
pub mod hygiene {
pub enum ExpansionMode {
Automatic,
Manual(Scope),
}
}
impl MacroContext {
pub fn set_hygienic_expansion(hygiene::ExpansionMode);
}
We may wish to offer other modes for expansion which allow for tweaking hygiene application without requiring full manual application. One possible mode is where the author provides a Scope for the macro definition (rather than using the scope where the macro is actually defined), but hygiene is otherwise applied automatically. We might wish to give the author the option of applying scopes due to the macro definition, but not the introduction scopes.
On a related note, might we want to affect how spans are applied when the current macro is expanded? I can't think of a use case right now, but it seems like something that might be wanted.
Blocks of tokens (that is, a Sequence token) may be marked (not sure how, exactly, perhaps using a distinguished context) such that they are expanded without any hygiene being applied or spans changed. There should be a function for creating such a Sequence from a TokenSlice in the tokens module. The primary motivation for this is to handle the tokens representing the body on which an annotation-like macro is present. For a 'decorator' macro, these tokens will be untouched (passed through by the macro), and since they are not touched by the macro, they should appear untouched by it (in terms of hygiene and spans).
We provide functionality to expand a provided macro or to lookup and expand a macro.
pub mod apply {
pub fn expand_macro(cx: &mut MacroContext,
expansion_scope: Scope,
macro: &TokenSlice,
macro_scope: Scope,
input: &TokenSlice)
-> Result<(TokenStream, Scope), ErrStruct>;
pub fn lookup_and_expand_macro(cx: &mut MacroContext,
expansion_scope: Scope,
macro: &TokenSlice,
input: &TokenSlice)
-> Result<(TokenStream, Scope), ErrStruct>;
}
These functions apply macro hygiene in the usual way, with expansion_scope dictating the scope into which the macro is expanded. Other spans and hygiene information are taken from the tokens. expand_macro takes pending scopes from macro_scope; lookup_and_expand_macro uses the proper pending scopes. In order to apply the hygiene algorithm, the result of the macro must be parsable. The returned scope will contain pending scopes that can be applied by the macro to subsequent tokens.
We could provide versions that don't take an expansion_scope and use cx.expansion_scope(). Probably unnecessary.
pub mod apply {
pub fn expand_macro_unhygienic(cx: &mut MacroContext,
macro: &TokenSlice,
input: &TokenSlice)
-> Result<TokenStream, ErrStruct>;
pub fn lookup_and_expand_macro_unhygienic(cx: &mut MacroContext,
macro: &TokenSlice,
input: &TokenSlice)
-> Result<TokenStream, ErrStruct>;
}
The _unhygienic variants expand a macro as in the first functions, but do not apply the hygiene algorithm or change any hygiene information. Any hygiene information on tokens is preserved. I'm not sure if _unhygienic are the right names - using these is not necessarily unhygienic, just that we are not automatically applying the hygiene algorithm.
Note that all these functions are doing an eager expansion of macros, or in Scheme terms they are local-expand functions.
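A sketch of eager expansion against the proposed apply API above; the error handling follows the emit-then-unreachable convention described earlier, and the function is illustrative:
// Sketch only: expand the macro named by `name` and splice its output into ours.
fn eager_expand(cx: &mut MacroContext, name: &TokenSlice, input: &TokenSlice) -> TokenStream {
    let scope = cx.expansion_scope();
    match apply::lookup_and_expand_macro(cx, scope, name, input) {
        Ok((tokens, _pending)) => tokens,
        Err(err) => { err.emit(); unreachable!() }
    }
}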
The function lookup_item takes a MacroContext and a path represented as a TokenSlice and returns a TokenStream for the item referred to by the path, or an error if name resolution failed. I'm not sure where this function should live.
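A sketch of how lookup_item might combine with the quasi-quoting described earlier (again, all proposed API; the wrapping module is illustrative):
// Sketch only: fetch an item's tokens by path and re-emit it inside a module.
fn wrap_item(cx: &mut MacroContext, path: &TokenSlice) -> TokenStream {
    let item = try_emit!(lookup_item(cx, path));
    quote!(cx, mod generated { $item }).unwrap()
}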
pub mod strings {
pub struct InternedString;
impl InternedString {
pub fn get(&self) -> String;
}
pub fn intern(cx: &mut MacroContext, s: &str) -> Result<InternedString, ErrStruct>;
pub fn find(cx: &mut MacroContext, s: &str) -> Result<InternedString, ErrStruct>;
pub fn find_or_intern(cx: &mut MacroContext, s: &str) -> Result<InternedString, ErrStruct>;
}
intern interns a string and returns a fresh InternedString. find tries to find an existing InternedString.
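A sketch of the intended usage; the helper name being interned is illustrative:
// Sketch only: intern a name once and reuse the handle when building tokens.
fn helper_name(cx: &mut MacroContext) -> InternedString {
    strings::find_or_intern(cx, "__my_macro_helper").unwrap()
}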
A span gives information about where in the source code a token is defined. It also gives information about where the token came from (how it was generated, if it was generated code).
There should be a spans module in libmacro, which will include a Span type which can be easily inter-converted with the Span defined in libsyntax. Libsyntax spans currently include information about stability; this will not be present in libmacro spans.
If the programmer does nothing special with spans, then they will be 'correct' by default. There are two important cases: tokens passed to the macro and tokens made fresh by the macro. The former will have the source span indicating where they were written and will include their history. The latter will have no source span and indicate they were created by the current macro. All tokens will have the history relating to expansion of the current macro added when the macro is expanded. At macro expansion, tokens with no source span will be given the macro use-site as their source.
Spans can be freely copied between tokens.
It will probably be useful to make it easy to manipulate spans. For example, rather than pointing at the macro's defining function, point at a helper function where the token is made. Or to set the origin to the current macro when the token was produced by another macro, which should be an implementation detail. I'm not sure what such an interface should look like (and it is probably not necessary in an initial library).
pub mod features {
pub enum FeatureStatus {
// The feature gate is allowed.
Allowed,
// The feature gate has not been enabled.
Disallowed,
// Use of the feature is forbidden by the compiler.
Forbidden,
}
pub fn query_feature(cx: &MacroContext, feature: Token) -> Result<FeatureStatus, ErrStruct>;
pub fn query_feature_by_str(cx: &MacroContext, feature: &str) -> Result<FeatureStatus, ErrStruct>;
pub fn query_feature_unused(cx: &MacroContext, feature: Token) -> Result<FeatureStatus, ErrStruct>;
pub fn query_feature_by_str_unused(cx: &MacroContext, feature: &str) -> Result<FeatureStatus, ErrStruct>;
pub fn used_feature_gate(cx: &MacroContext, feature: Token) -> Result<(), ErrStruct>;
pub fn used_feature_by_str(cx: &MacroContext, feature: &str) -> Result<(), ErrStruct>;
pub fn allow_feature_gate(cx: &MacroContext, feature: Token) -> Result<(), ErrStruct>;
pub fn allow_feature_by_str(cx: &MacroContext, feature: &str) -> Result<(), ErrStruct>;
pub fn disallow_feature_gate(cx: &MacroContext, feature: Token) -> Result<(), ErrStruct>;
pub fn disallow_feature_by_str(cx: &MacroContext, feature: &str) -> Result<(), ErrStruct>;
}
The query_* functions query whether a feature gate has been set. They return an error if the feature gate does not exist. The _unused variants do not mark the feature gate as used. The used_ functions mark a feature gate as used, or return an error if it does not exist.
The allow_ and disallow_ functions set a feature gate as allowed or disallowed for the current crate. These functions will only affect feature gates which take effect after parsing and expansion are complete. They do not affect feature gates which are checked during parsing or expansion.
Question: do we need the used_ functions? We could just call query_ and ignore the result.
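A sketch of how a macro might gate itself on a feature, assuming the module above; the feature name is illustrative:
// Sketch only: report an error unless a (hypothetical) feature gate is enabled.
fn require_gate(cx: &MacroContext) {
    match features::query_feature_by_str(cx, "my_macro_feature") {
        Ok(features::FeatureStatus::Allowed) => {
            let _ = features::used_feature_by_str(cx, "my_macro_feature");
        }
        _ => cx.error(None, "the `my_macro_feature` feature gate is not enabled"),
    }
}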
We need some mechanism for setting attributes as used. I don't actually know how the unused attribute checking in the compiler works, so I can't spec this area. But I expect MacroContext to make available some interface for reading attributes on a macro use and marking them as used.
Over the last 3 weeks, based on feedback, we proceeded to flesh out the concepts and the code behind Skizze.
Neil Patel suggested the following:
So I've been thinking about the server API. I think we want to choose one thing and do it as well as possible, instead of having six ways to talk to the server. I think that helps to keep things sane and simple overall.
Thinking about usage, I can only really imagine Skizze in an environment like ours, which is high-throughput. I think that is its 'home' and we should be optimising for that all day long.
Taking that into account, I believe we have two options:
We go the gRPC route, provide .proto files and let people use the existing gRPC tooling to build support for their favourite language. That means we can happily give Ruby/Node/C#/etc devs a real way to get started up with Skizze almost immediately, piggy-backing on the gRPC docs etc.
We absorb the Redis Protocol. It does everything we need, is very lean, and we can (mostly) easily adapt it for what we need to do. The downside is that to get support from other libs, there will have to be actual libraries for every language. This could slow adoption, or it might be easy enough if people can reuse existing REDIS code. It's hard to tell how that would end up.
gRPC is interesting because it's built already for distributed systems, across bad networks, and obviously is bi-directional etc. Without us having to spend time on the protocol, gRPC lets us easily add features that require streaming. Like, imagine a client being able to listen for changes in count/size and be notified instantly. That's something that gRPC is built for right now.
I think gRPC is a bit verbose, but I think it'll pay off for ease of third-party lib support and as things grow.
The CLI could easily be built to work with gRPC, including adding support for streaming stuff etc. Which could be pretty exciting.
That being said, we gave Skizze a new home, where based on feedback we developed .proto files and started rewriting big chunks of the code.
We added a new wrapper called "domain" which represents a stream. It wraps around Count-Min-Log, Bloom Filter, Top-K and HyperLogLog++, so when feeding it values it feeds all the sketches. Later we intend to allow attaching and detaching sketches from "domains" (We need a better name).
We also implemented a gRPC API which should allow easy wrapper creation in other languages.
Special thanks go to Martin Pinto for helping out with unit tests and Soren Macbeth for thorough feedback and ideas about the "domain" concept.
Take a look at our initial REPL work there:
Note: I don’t work for Mozilla any more, so (like Adele) these are my thoughts ‘from the outside’…
Open Badges is no longer a Mozilla project. In fact, it hasn’t been for a while — the Badge Alliance was set up a couple of years ago to promote the specification on a both a technical and community basis. As I stated in a recent post, this is a good thing and means that the future is bright for Open Badges.
However, Mozilla is still involved with the Open Badges project: Mark Surman, Executive Director of the Mozilla Foundation, sits on the board of the Badge Alliance. Mozilla also pays for contractors to work on the Open Badges backpack and there were badges earned at the Mozilla Festival a few months ago.
Although it may seem strange for those used to corporates interested purely in profit, Mozilla creates what the open web needs at any given time. Like any organisation, sometimes it gets these wrong, either because the concept was flawed, or because the execution was poor. Other times, I’d argue, Mozilla doesn’t give ideas and concepts enough time to gain traction.
Open Badges, at its very essence, is a technical specification. It allows credentials with metadata hard-coded into them to be issued, exchanged, and displayed. This is done in a secure, standardised manner.
For users to be able to access their ‘backpack’ (i.e. the place they store badges) they needed a secure login system. Back in 2011, at the start of the Open Badges project, it made sense to make use of Mozilla’s nascent Persona project. This aimed to provide a way for users to easily sign into sites around the web without using their Facebook/Google logins. These ‘social’ sign-in methods mean that users are tracked around the web — something that Mozilla was obviously against.
By 2014, Persona wasn’t seen to be having the kind of ‘growth trajectory’ that Mozilla wanted. The project was transferred to community ownership and most of the team left Mozilla in 2015. It was announced that Persona would be shutting down as a Mozilla service in November 2016. While Persona will exist as an open source project, it won’t be hosted by Mozilla.
Although I’m not aware of an official announcement from the Badge Alliance, I think it’s worth making three points here.
If you’re a developer, you can still use Persona. It’s open source. It works.
The Open Badges backpack is one place where users can store their badges. There are others, including the Open Badge Passport and Open Badge Academy. MacArthur, who seed-funded the Open Badges ecosystem, have a new platform launching through LRNG.
It is up to the organisations behind these various solutions as to how they allow users to authenticate. They may choose to allow social logins. They may force users to create logins based on their email address. They may decide to use an open source version of Persona. It’s entirely up to them.
The Persona authentication system runs off email addresses. This means that transitioning from Persona to another system is relatively straightforward. It has, however, meant that for the past few years we’ve had a recurrent problem: what do you do with people being issued badges to multiple email addresses?
Tying badges to emails seemed like the easiest and fastest way to get to a critical mass in terms of Open Badge adoption. Now that’s worked, we need to think in a more nuanced way about allowing users to tie multiple identities to a single badge.
Persona was always a slightly awkward fit for Open Badges. Although, for a time, it made sense to use Persona for authentication to the Open Badges backpack, we’re now in a post-Persona landscape. This brings with it certain advantages.
As Nate Otto wrote in his post Open Badges in 2016: A Look Ahead, the project is growing up. It’s time to move beyond what was expedient at the dawn of Open Badges and look to the future. I’m sad to see the decline of Persona, but I’m excited about what the future holds!
Header image CC BY-NC-SA Barbara
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.
This week's edition was edited by: nasa42, brson, and llogiq.
164 pull requests were merged in the last week.
See the triage digest and subteam reports for more details.
Make str::replace take a pattern.
Add impl for Box<Error> from String.
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:
Add [ to the FOLLOW(ty) in macro future-proofing rules.
Change for loop desugaring to use language items.
Add an IndexAssign trait that allows overloading "indexed assignment" expressions like a[b] = c.
Add an alias attribute to #[link] and -l.
Add a some! macro for unwrapping Option more safely.
Expose the volatile_load and volatile_store intrinsics as ptr::volatile_read and ptr::volatile_write.
If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.
Tweet us at @ThisWeekInRust to get your job offers listed here!
This week's Crate of the Week is toml, a crate for all our configuration needs, simple yet effective.
Thanks to Steven Allen for the suggestion.
Submit your suggestions for next week!
Borrow/lifetime errors are usually Rust compiler bugs. Typically, I will spend 20 minutes detailing the precise conditions of the bug, using language that understates my immense knowledge, while demonstrating sympathetic understanding of the pressures placed on a Rust compiler developer, who is also probably studying for several exams at the moment. The developer reading my bug report may not understand this stuff as well as I do, so I will carefully trace the lifetimes of each variable, where memory is allocated on the stack vs the heap, which struct or function owns a value at any point in time, where borrows begin and where they... oh yeah, actually that variable really doesn't live long enough.
Thanks to Wa Delma for the suggestion.
In my previous post, I started discussing in more detail what my internship entails, by talking about my first contribution to Servo. As a refresher, my first contribution was as part of my application to Outreachy, which I later revisited during my internship after a change I introduced to the HTML Standard it relied on. I’m going to expand on that last point today- specifically, how easy it is to introduce changes in WHATWG’s various standards. I’m also going to talk about how this accessibility to changing web standards affects how I can understand it, how I can help improve it, and my work on Servo.
There are many ways to get involved with WHATWG, but there are two that I’ve become the most familiar with: firstly, by opening a discussion about a perceived issue and asking how it should be resolved; secondly, by taking on an issue approved as needing change and making the desired change. I’ve almost entirely only done the former, and the latter only for some minor typos. Any changes that relate directly to my work, however minor, are significant for me though! Like I discussed in my previous post, I brought attention to an inconsistency that was resolved, giving me a new task of updating my first contribution to Servo to reflect the change in the HTML Standard. I’ve done that several times since, for the Fetch Standard.
My first two weeks of my internship were spent on reading through the majority of the Fetch Standard, primarily the various Fetch functions. I took many notes describing the steps to myself, annotated with questions I had and the answers I got from either other people on the Servo team who had worked with Fetch (including my internship mentor, of course!) or people from WHATWG who were involved in the Fetch Standard. Getting so familiar with Fetch meant a few things: I would notice minor errors (such as an out of date link) that I could submit a simple fix for, or a bigger issue that I couldn’t resolve myself.
I’m going to go into more detail about some of those bigger issues. From my perspective, when I start a discussion about a piece of documentation (such as the Fetch Standard, or reading about a programming library Servo uses), I go into it thinking “Either this documentation is incorrect, or my understanding is incorrect”. Whichever the answer is, it doesn’t mean that the documentation is bad, or that I’m bad at reading comprehension. I understand best by building up a model of something in my head, putting that to practice, and asking a lot of questions along the way. I learn by getting things wrong and figuring out why I was wrong, and sometimes in the process I uncover a point that could be made more clear, or an inconsistency! I have good examples of both of the different outcomes I listed, which I’ll cover over the next two sections.
Early on in my initial review of the Fetch Standard’s several protocols, I found a major step that seemed to have no use. I understood that since I was learning Fetch on a step-by-step basis, I did not have a view of the bigger picture, so I asked around what I was missing that would help me understand this. One of the people I work with on implementing Fetch agreed with me that the step seemed to have no purpose, and so we decided to open an issue asking about removing it from the standard. It turned out that I had actually missed the meaning of it, as we learned. However, instead of leaving it there, I shifted the issue into asking for some explanatory notes on why this step is needed, which was fulfilled. This meant that I would have a reference to go back to should I forget the significance of the step, and that people reading the Fetch Standard in the future would be much less likely to come to the same incorrect conclusion I had.
Shortly after I had first discovered that apparent issue, I found myself struggling to comprehend a sequence of actions in another Fetch protocol. The specification seemed to say that part of an early step was meant to only be done after the final step. I unfortunately don’t remember details of the discussion I had about this- if there was a reason for why it was organized like this, I forget what it was. Regardless, it was agreed that moving those sub-steps to be actually listed after the step they’re supposed to run after would be a good change. This meant that I would need to re-organize my notes to reflect the re-arranged sequence of actions, as well as have an easier time being able to follow this part of the Fetch Standard.
Like I said at the start of this post, I’m going to talk about how changes in the Fetch Standard affects my work on Servo itself. What I’ve covered so far has mostly been how changes affect my understanding of the standard itself. A key aspect in understanding the Fetch protocols is reviewing them for updates that impact me. WHATWG labels every standard they author as a “Living Standard” for good reason. It was one thing for me to learn how easy it is to introduce changes, while knowing exactly what’s going on, but it’s another for me to understand that anybody else can, and often does, make changes to the Fetch Standard!
When an update is made to the Fetch Standard, it’s not so difficult to deal with as one might imagine. The Fetch Standard always notes the last day it was updated at the top of the document, I follow a Twitter account that posts about updates, and all the history can be seen on GitHub which will show me exactly what has been changed as well as some discussion on what the change does. All of these together alert me to the fact that the Fetch Standard has been modified, and I can quickly see what was revised. If it’s relevant to what I’m going to be implementing, I update my notes to match it. Occasionally, I need to change existing code to reflect the new Standard, which is also easily done by comparing my new notes to the Fetch implementation in Servo!
From all of this, it might sound like the Fetch Standard is unfinished, or unreliable/inconsistent. I don’t mean to misrepresent it- the many small improvements help make the Fetch Standard, like all of WHATWG’s standards, better and more reliable. You can think of the status of the Fetch Standard at any point in time as a single, working snapshot. If somebody implemented all of Fetch as it is now, they’d have something that works by itself correctly. A different snapshot of Fetch is just that- different. It will have an improvement or two, but that doesn’t obsolete anybody who implemented it previously. It just means if they revisit the implementation, they’ll have things to update.
Third post over.
A-Frame is a WebVR framework that introduces the entity-component system (docs) to the DOM. The entity-component system treats every entity in the scene as a placeholder object which we apply and mix components to in order to add appearance, behavior, and functionality. A-Frame comes with some standard components out of the box like camera, geometry, material, light, or sound. However, people can write, publish, and register their own components to do whatever they want like have entities collide/explode/spawn, be controlled by physics, or follow a path. Today, we'll be going through how we can write our own A-Frame components.
Note that this tutorial will be covering the upcoming release of A-Frame 0.2.0 which vastly improves the component API.
A component contains a bucket of data in the form of component properties. This data is used to modify the entity. For example, we might have an engine component. Possible properties might be horsepower or cylinders.
Let's first see what a component looks like from the DOM.
For example, the light component has properties such as type, color, and intensity. In A-Frame, we register and configure a component to an entity using an HTML attribute and a style-like syntax:
<a-entity light="type: point; color: crimson; intensity: 2.5"></a-entity>
This would give us a light in the scene. To demonstrate composability, we could give the light a spherical representation by mixing in the geometry component.
<a-entity geometry="primitive: sphere; radius: 5"
light="type: point; color: crimson; intensity: 2.5"></a-entity>
Or we can configure the position component to move the light sphere a bit to the right.
<a-entity geometry="primitive: sphere; radius: 5"
light="type: point; color: crimson; intensity: 2.5"
position="5 0 0"></a-entity>
Given the style-like syntax and that it modifies the appearance and behavior of DOM nodes, component properties can be thought of as a rough analog to CSS. In the near future, I can imagine component property stylesheets.
Now let's see what a component looks like under the hood. A-Frame's most basic component is the position component:
AFRAME.registerComponent('position', {
schema: { type: 'vec3' },
update: function () {
var object3D = this.el.object3D;
var data = this.data;
object3D.position.set(data.x, data.y, data.z);
}
});
The position component uses only a tiny subset of the component API, but what this does is register the component with the name "position", define a schema where the component's value will be parsed to an {x, y, z} object, and, when the component initializes or the component's data updates, set the position of the entity with the update callback. this.el is a reference from the component to the DOM element, or entity, and object3D is the entity's three.js object. Note that A-Frame is built on top of three.js, so many components will be using the three.js API.
So we see that components consist of a name and a definition, and then they can be registered to A-Frame. We saw that the position component definition defined a schema and an update handler. Components simply consist of the schema, which defines the shape of the data, and several handlers for the component to modify the entity in reaction to different types of events.
Here is the current list of properties and methods of a component definition:
| Property | Description |
|---|---|
| data | Data of the component derived from the schema default values, mixins, and the entity's attributes. |
| el | Reference to the entity element. |
| schema | Names, types, and default values of the component property value(s). |

| Method | Description |
|---|---|
| init | Called once when the component is initialized. |
| update | Called both when the component is initialized and whenever the component's data changes (e.g., via setAttribute). |
| remove | Called when the component detaches from the element (e.g., via removeAttribute). |
| tick | Called on each render loop or tick of the scene. |
| play | Called whenever the scene or entity plays to add any background or dynamic behavior. |
| pause | Called whenever the scene or entity pauses to remove any background or dynamic behavior. |
The component's schema defines what type of data it takes. A component can either be single-property or consist of multiple properties. And properties have property types. Note that single-property schemas and property types are being released in A-Frame v0.2.0.
A property might look like:
{ type: 'int', default: 5 }
And a schema consisting of multiple properties might look like:
{
color: { default: '#FFF' },
target: { type: 'selector' },
uv: {
default: '1 1',
parse: function (value) {
return value.split(' ').map(parseFloat);
}
},
}
Since components in the entity-component system are just buckets of data that are used to affect the appearance or behavior of the entity, the schema plays a crucial role in the definition of the component.
A-Frame comes with several built-in property types such as boolean, int, number, selector, string, or vec3. Every single property is assigned a type, whether explicitly through the type key or implicitly via inferring the value. And each type is used to assign parse and stringify functions. The parser deserializes the incoming string value from the DOM to be put into the component's data object. The stringifier is used when using setAttribute to serialize back to the DOM.
We can actually define and register our own property types:
AFRAME.registerPropertyType('radians', {
parse: function () {
}
// Default stringify is .toString().
});
If a component has only one property, then it must either have a type or a default value. If the type is defined, then the type is used to parse and coerce the string retrieved from the DOM (e.g., getAttribute). Or if the default value is defined, the default value is used to infer the type.
Take for instance the visible component. The schema property definition implicitly defines it as a boolean:
AFRAME.registerComponent('visible', {
schema: {
// Type will be inferred to be boolean.
default: true
},
// ...
});
Or the rotation component, which explicitly defines the value as a vec3:
AFRAME.registerComponent('rotation', {
schema: {
// Default value will be 0, 0, 0 as defined by the vec3 property type.
type: 'vec3'
}
// ...
});
Using these defined property types, schemas are processed by registerComponent to inject default values, parsers, and stringifiers for each property. So if a default value is not defined, the default value will be whatever the property type defines as the "default default value".
If a component has multiple properties (or one named property), then it consists of one or more property definitions, in the form described above, in an object keyed by property name. For instance, a physics body component might define a schema:
AFRAME.registerComponent('physics-body', {
schema: {
boundingBox: {
type: 'vec3',
default: { x: 1, y: 1, z: 1 }
},
mass: {
default: 0
},
velocity: {
type: 'vec3'
}
}
});
Having multiple properties is what makes the component take the syntax in the form of physics="mass: 2; velocity: 1 1 1".
With the schema defined, all data coming into the component will be passed through the schema for parsing. Then in the lifecycle methods, the component has access to this.data, which in a single-property schema is a value and in a multiple-property schema is an object.
init is called once in the component's lifecycle when it is mounted to the entity. init is generally used to set up variables or members that may be used throughout the component or to set up state. Though not every component will need to define an init handler. It is sort of like the component-equivalent method to createdCallback or React.ComponentDidMount.
For example, the look-at component's init handler sets up some variables:
init: function () {
this.target3D = null;
this.vector = new THREE.Vector3();
},
// ...
The update handler is called both at the beginning of the component's lifecycle with the initial this.data and every time the component's data changes (generally during the entity's attributeChangedCallback, like with a setAttribute). The update handler gets access to the previous state of the component data passed in through oldData. The previous state of the component can be used to tell exactly which properties changed in order to do more granular updates.
The update handler uses this.data to modify the entity, usually interacting with three.js APIs. One of the simplest update handlers is the visible component's:
update: function () {
  this.el.object3D.visible = this.data;
}
A slightly more complex update handler might be the light component's, which we'll show via abbreviated code:
update: function (oldData) {
  var light = this.light;
  var diffData = diff(this.data, oldData || {});
  if (light && !('type' in diffData)) {
    // If there is an existing light and the type hasn't changed, update the light.
    Object.keys(diffData).forEach(function (property) {
      light[property] = diffData[property];
    });
  } else {
    // No light exists yet or the type of light has changed, create a new light.
    this.light = this.getLight(this.data);
    // Register the object3D of type `light` to the entity.
    this.el.setObject3D('light', this.light);
  }
}
The entity's object3D is a plain THREE.Object3D. Other three.js object types such as meshes, lights, and cameras can be set with setObject3D, where they will be appended to the entity's object3D.
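As an illustrative sketch (not from the original post), a component that creates a mesh would register it the same way from its update handler:
update: function () {
  var geometry = new THREE.BoxGeometry(1, 1, 1);
  var material = new THREE.MeshBasicMaterial({color: 0xff0000});
  // Register the mesh on the entity; it gets appended to the entity's object3D.
  this.el.setObject3D('mesh', new THREE.Mesh(geometry, material));
}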
The remove handler is called when the component detaches from the entity, such as with removeAttribute. This is generally used to remove all modifications, listeners, and behaviors that the component added to the entity.
For example, when the light component detaches, it removes the light it previously attached from the entity and thus the scene:
remove: function () {
  this.el.removeObject3D('light');
}
The tick handler is called on every single tick or render loop of the scene, so expect it to run on the order of 60 to 120 times per second. The global uptime of the scene in seconds is passed into the tick handler.
For example, the look-at component, which instructs an entity to look at another target entity, uses the tick handler to update the rotation in case the target entity changes its position:
tick: function (t) {
  // target3D and vector are set from the update handler.
  if (this.target3D) {
    this.el.object3D.lookAt(this.vector.setFromMatrixPosition(this.target3D.matrixWorld));
  }
}
To support pause and play, just as with a video game or to toggle entities for performance, components can implement play and pause handlers. These are invoked when the component's entity runs its play or pause method. When an entity plays or pauses, all of its child entities are also played or paused. Components should implement play or pause handlers if they register any dynamic, asynchronous, or background behavior such as animations, event listeners, or tick handlers.
For example, the look-controls component simply removes its event listeners so that the camera does not move when the scene is paused, and adds its event listeners back when the scene starts playing or is resumed:
pause: function () {
  this.removeEventListeners();
},
play: function () {
  this.addEventListeners();
}
I suggest that people start off with my component boilerplate, even hardcore tool junkies. This will get you straight into building a component and comes with everything you will need to publish your component into the wild. The boilerplate handles creating a stubbed component, build steps for both NPM and browser distribution files, and publishing to Github Pages.
Generally with boilerplates, it is better to start from scratch and build your own, but the A-Frame component boilerplate contains a lot of tribal inside knowledge about A-Frame and is updated frequently to reflect new things landing in A-Frame. The only possibly opinionated piece of the boilerplate is the development tooling it uses internally, which is hidden away behind NPM scripts.
Under construction. Stay tuned!
The last Mozilla All-Hands was at one of the hotels in the Walt Disney World Resort in Florida. Every attendee was issued with one of these (although their use was optional):
It’s called a “Magic Band”. You register it online and connect it to your Disney account, and then it can be used for park entry, entry to pre-booked rides so you don’t have to queue (called “FastPass+”), payment, picking up photos, as your room key, and all sorts of other convenient features. Note that it has no UI whatsoever – no lights, no buttons. Not even a battery compartment. (It does contain a battery, but it’s not replaceable.) These are specific design decisions – the aim is for ultra-simple convenience.
One of the talks we had at the All Hands was from one of the Magic Band team. The audience reactions to some of the things he said were really interesting. He gave the example of Cinderella wishing you a Happy Birthday as you walk round the park. “Cinderella just knows”, he said. Of course, in fact, her costume’s tech prompts her when it silently reads your Magic Band from a distance. This got some initial impressed applause, but it was noticeable that after a few moments, it wavered – people were thinking “Cool… er, but creepy?”
The Magic Band also has range sufficient that Disney can track you around the park. This enables some features which are good for both customers and Disney – for example, they can use it for load balancing. If one area of the park seems to be getting overcrowded, have some characters pop up in a neighbouring area to try and draw people away. But it means that they always know where you are and where you’ve been.
My take-away from learning about the Magic Band is that it’s really hard to have a technical solution to this kind of requirement which allows all the Convenient features but not the Creepy features. Disney does offer an RFID-card-based solution for the privacy-conscious which does some of these things, but not all of them. And it’s easier to lose. It seems to me that the only way to distinguish the two types of feature, and get one and not the other, is policy – either the policy of the organization, or external restrictions on them (e.g. from a watchdog body’s code of conduct they sign up to, or from law). And it’s often not in the organization’s interest to limit themselves in this way.
Chances are, your guess is wrong!
There is nothing more frustrating than being capable of something and not getting a chance to do it. The same goes for being blocked out from something although you are capable of consuming it. Or you’re even willing to put some extra effort or even money in and you still don’t get to consume it.
For example, I’d happily pay $50 a month to get access to Netflix’s world-wide library from any country I’m in. But the companies Netflix gets its content from won’t go for that. Movies and TV shows are budgeted by predicted revenue in different geographical markets with month-long breaks in between the releases. A world-wide network capable of delivering content in real time? Preposterous — let’s shut that down.
On a less “let’s break a 100 year old monopoly” scale of annoyance, I tweeted yesterday something glib and apparently cruel:
“Sorry, but your browser does not support WebGL!” – sorry, you are a shit coder.
And I stand by this. I went to a web site that promised me some cute, pointless animation and technological demo. I was using Firefox Nightly — a WebGL capable browser. I also went there with Microsoft Edge — another WebGL capable browser. Finally, using Chrome, I was able to delight in seeing an animation.
I’m not saying the creators of that thing lack development capabilities. The demo was slick, beautiful and well coded. They still lack two things developers of web products (and I count apps in that) should have: empathy for the end user and an understanding that they are not in control.
Now, I am a pretty capable technical person. When you tell me that I might be lacking WebGL, I know what you mean. I don’t lack WebGL. I was blocked out because the web site did browser sniffing instead of capability testing. But I know what could be the problem.
A normal user of the web has no idea what WebGL is and — if you’re lucky — will try to find it on an app store. If you’re not lucky all you did is confuse a person. A person who went through the effort to click a link, open a browser and wait for your thing to load. A person that feels stupid for using your product as they have no clue what WebGL is and won’t ask. Humans hate feeling stupid and we do anything not to appear it or show it.
This is what I mean by empathy for the end user. Our problems should never become theirs.
A cryptic error message telling the user that they lack some technology helps nobody and is sloppy development at best, sheer arrogance at worst.
The web is, sadly enough, littered with unhelpful error messages and assumptions that it is the user’s fault when they can’t consume the thing we built.
Here’s a reality check — this is what our users should have to do to consume the things we build:
That’s right. Nothing. This is the web. Everybody is invited to consume, contribute and create. This is what made it the success it is. This is what will make it outlive whatever other platform threatens it with shiny impressive interactions. Interactions at that time impossible to achieve with web technologies.
Whenever I mention this, the knee-jerk reaction is the same:
How can you expect us to build delightful experiences close to magic (and whatever other soundbites were in the last Apple keynote) if we keep having to support old browsers and users with terrible setups?
You don’t have to support old browsers and terrible setups. But you are not allowed to block them out. It is a simple matter of giving a usable interface to end users. A button that does nothing when you click it is not a good experience. Test if the functionality is available, then create or show the button. It is as simple as that.
If you really have to rely on some technology then show people what they are missing out on and tell them how to upgrade. A screenshot or a video of a WebGL animation is still lovely to see. A message telling me I have no WebGL less so.
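A minimal capability check looks something like this (a sketch: the element ID and the startDemo entry point are placeholders, not from the original post):
function supportsWebGL() {
  // Capability testing: ask the browser instead of sniffing its user agent string.
  var canvas = document.createElement('canvas');
  return !!(window.WebGLRenderingContext &&
            (canvas.getContext('webgl') || canvas.getContext('experimental-webgl')));
}

if (supportsWebGL()) {
  startDemo();                                         // placeholder demo entry point
} else {
  document.querySelector('#fallback').hidden = false;  // e.g. a screenshot or video
}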
Even more on the black and white scale, what the discussion boils down to is in essence:
But it is 2016 — surely we can expect people to have JavaScript enabled — it is after all “the assembly language of the web”
Despite the cringe-worthy misquote of the assembly language thing, here is a harsh truth:
You can absolutely expect JavaScript to be available on your end users’ computers in 2016. At the same time it is painfully naive to expect it to work under all circumstances.
JavaScript is brittle. HTML and CSS both are fault tolerant. If something goes wrong in HTML, browsers either display the content of the element or try to fix minor issues like unclosed elements for you. CSS skips lines of code it can’t understand and merrily goes on its way to show the rest of it. JavaScript breaks on errors and tells you that something went wrong. It will not execute the rest of the script, but throws in the towel and tells you to get your house in order first.
There are many outside influences that will interfere with the execution of your JavaScript. That’s why a non-naive and non-arrogant — a dedicated and seasoned web developer — will never rely on it. Instead, you treat it as an enhancement and in an almost paranoid fashion test for the availability of everything before you access it.
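In practice that paranoid testing looks something like this (a sketch with a hypothetical element ID, not code from the post):
// Only enhance when the APIs you need actually exist.
if ('geolocation' in navigator && 'querySelector' in document) {
  navigator.geolocation.getCurrentPosition(function (position) {
    document.querySelector('#location').textContent =
      position.coords.latitude + ', ' + position.coords.longitude;
  });
}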
Sorry (not sorry) — this will never go away. This is the nature of JavaScript. And it is a good thing. It means we can access new features of the language as they come along instead of getting stuck in a certain state. It means we have to think about using it every time instead of relying on libraries to do the work for us. It means that we need to keep evolving with the web — a living and constantly changing medium, and not a software platform. That’s just part of it.
This is why the whole discussion about JavaScript enabled or disabled is a massive waste of time. It is not the availability of JavaScript we need to worry about. It is our products breaking in perfectly capable environments because we rely on perfect execution instead of writing defensive code. A tumblr like Sigh, JavaScript is fun, but it is pithy finger-pointing.
There is nothing wrong with using JavaScript to build things. Just be aware that the error handling is your responsibility.
Any message telling the user that they have to turn on JavaScript to use a certain product is a proof that you care more about your developer convenience than your users.
It is damn hard these days to turn off JavaScript – you are complaining about an almost non-existent issue and telling the confused user to do something they don’t know how to do.
The chance that something in the JavaScript execution of any of your dozens of dependencies went wrong is much higher – and this is your job to fix. This is why advice like using noscript to provide alternative content is terrible. It means you double your workload instead of enhancing what works. Who knows? If you start with something not JavaScript dependent (or running it server side) you might find that you don’t need the complex solution you started with in the first place. Faster, smaller, easier. Sounds good, right?
So, please, stop sniffing my browser, you will fail and tell me lies. Stop pretending that working with a brittle technology is the user’s fault when something goes wrong.
As web developers we work in the service industry. We deliver products to people. And keeping these people happy and non-worried is our job. Nothing more, nothing less.
Without users, your product is nothing. Sure, we are better paid and well educated and we are not flipping burgers. But we have no right whatsoever to be arrogant and not understanding that our mistakes are not the fault of our end users.
Our demeanor when complaining about how stupid our end users and their terrible setups are reminds me of this Mitchell and Webb sketch.
Don’t be that person. Our job is to enable people to consume, participate and create the web. This is magic. This is beautiful. This is incredibly rewarding. The next markets we should care about are ready to be as excited about the web as we were when we first encountered it. Browsers are good these days. Use what they offer after testing for it and enjoy what you can achieve. Don’t tell the user when things go wrong – they can not fix what you messed up.
This is a brown paper bag release. It turns out I managed to break the upgrade
path only 10 commits before the release.
git cinnabar fsck doesn't fail to upgrade metadata.
The remote.$remote.cinnabar-draft config works again.
More SEO-bait, after tracking down a poorly documented problem:
# buildbot start master
Following twistd.log until startup finished..
2016-01-17 04:35:49+0000 [-] Log opened.
2016-01-17 04:35:49+0000 [-] twistd 14.0.2 (/usr/bin/python 2.7.6) starting up.
2016-01-17 04:35:49+0000 [-] reactor class: twisted.internet.epollreactor.EPollReactor.
2016-01-17 04:35:49+0000 [-] Starting BuildMaster -- buildbot.version: 0.8.12
2016-01-17 04:35:49+0000 [-] Loading configuration from '/home/user/buildbot/master/master.cfg'
2016-01-17 04:35:53+0000 [-] error while parsing config file:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1155, in gotResult
_inlineCallbacks(r, g, deferred)
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1099, in _inlineCallbacks
result = g.send(result)
File "/usr/local/lib/python2.7/dist-packages/buildbot/master.py", line 189, in startService
self.configFileName)
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/buildbot/config.py", line 156, in loadConfig
exec f in localDict
File "/home/user/buildbot/master/master.cfg", line 415, in <module>
extra_post_params={'secret': HOMU_BUILDBOT_SECRET},
File "/usr/local/lib/python2.7/dist-packages/buildbot/status/status_push.py", line 404, in __init__
secondaryQueue=DiskQueue(path, maxItems=maxDiskItems))
File "/usr/local/lib/python2.7/dist-packages/buildbot/status/persistent_queue.py", line 286, in __init__
self.secondaryQueue.popChunk(self.primaryQueue.maxItems()))
File "/usr/local/lib/python2.7/dist-packages/buildbot/status/persistent_queue.py", line 208, in popChunk
ret.append(self.unpickleFn(ReadFile(path)))
exceptions.EOFError:
2016-01-17 04:35:53+0000 [-] Configuration Errors:
2016-01-17 04:35:53+0000 [-] error while parsing config file: (traceback in logfile)
2016-01-17 04:35:53+0000 [-] Halting master.
2016-01-17 04:35:53+0000 [-] Main loop terminated.
2016-01-17 04:35:53+0000 [-] Server Shut Down.
This happened after the buildmaster’s disk filled up and a bunch of stuff was manually deleted. There were no changes to master.cfg since it worked perfectly.
The fix was to examine master.cfg to see where the HttpStatusPush was created, of the form:
c['status'].append(HttpStatusPush(
serverUrl='http://build.servo.org:54856/buildbot',
extra_post_params={'secret': HOMU_BUILDBOT_SECRET},
))
Digging in the Buildbot source reveals that persistent_queue.py wants to unpickle a cache file from /events_build.servo.org/-1 if there was nothing in /events_build.servo.org/. To fix this the right way, create that file and make sure Buildbot has +rwx on it.
Alternately, you can give up on writing your status push cache to disk entirely by adding the line maxDiskItems=0 to the creation of the HttpStatusPush, giving you:
c['status'].append(HttpStatusPush(
serverUrl='http://build.servo.org:54856/buildbot',
maxDiskItems=0,
extra_post_params={'secret': HOMU_BUILDBOT_SECRET},
))
The real moral of the story is “remember to use logrotate.”
Let's suppose you have a rather long document, for instance a book chapter, and you want to render it in your browser à la iBooks/Kindle. That's rather easy with just a dash of CSS:
body {
  height: calc(100vh - 24px);
  column-width: 45vw;
  overflow: hidden;
  margin-left: calc(-50vw * attr(currentpage integer));
}
Yes, yes, I know that no browser implements that attr() extended syntax. So put an inline style on your body for margin-left: calc(-50vw * <n>) where <n> is the page number you want minus 1.
Then add the fixed positioned controls you need to let user change page, plus gesture detection. Add a transition on margin-left to make it nicer. Done. Works perfectly in Firefox, Safari, Chrome and Opera. I don't have a Windows box handy so I can't test on Edge.
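A rough sketch of such page controls (the button IDs and the currentPage counter are assumptions, not part of the original demo):
var currentPage = 0;  // page number minus 1

function goToPage(n) {
  currentPage = Math.max(0, n);
  // Mirror the unsupported attr() expression by hand: shift the columns left.
  document.body.style.marginLeft = 'calc(-50vw * ' + currentPage + ')';
}

document.querySelector('#next').addEventListener('click', function () {
  goToPage(currentPage + 1);
});
document.querySelector('#prev').addEventListener('click', function () {
  goToPage(currentPage - 1);
});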
This post has been written about the Mozilla Foundation (MoFo) 2020 strategy.
The ideas developed in this post are in different levels: some are global, some focus on particular points of the proposed draft. But in my point of view, they all carry a transversal meaning: articulation (as piece connected to a structure allowing movement) with others and consistency with our mission.
On the way to radical participation, Mozilla should be radically user-centric. Mozilla should not go against the social understanding of the (tech and whole society) situation, because it's what is massively shared and what polarizes the prism of understanding of society. We should build solutions for it and transform (develop and change) it on the way. Our responsibility is to build inclusivity (inclusion strengths) everywhere, to gather people and multiply our impact. We must build (progressive) victories instead of battles (of static positions and postures).
If we don't do it, we go against users' self-perceived need: use. We value our differences more than our commonalities and consider ethics more as an absolute objective than a concrete process: we divide, separate, compete. Our solutions get irrelevant, we get rejected and marginalized, we reject compromises that improve the current situation in the name of the ideal, we lose influence and therefore impact on the definition of the present and future. We have already done this, for good and for bad, in the past (H.264+Daala, Pocket integration, Hello login, no Firefox for iOS, Google fishing vs Disconnect, the FxOS Notes app whose sync is Evernote-only, …).
To get a consistent and impactful ability to integrate and transform the social understanding, there are four domains where we can take and articulate (connected structure allowing movement) action:
Our front has two sides: propose and protect. But each of them are connected and can have different strategic expressions, if our actions generate improving (progressive) curves:
Social history is a history of social values. The way we understand and tell the problem determines the solution we can create: we need, all the way long, a shared understanding. Tools and technologies are not tied, bound forever, to their social value, which depends on people's social representations that evolve over time.
An analysis of the creation process is another way to the articulation of product with people and technology.
Platforms are moving closer to strict 'walled garden' ecosystems. We need bridges from lab to home that carry different mixes of customization and reliability to support the emancipation curve. We need to build pathways through audiences and through IT layers (content, software, hardware, distant services). We should find a convergence between customization (from dev code patches to user add-ons) and reliability (from self-made to mass product), between first-time experience, support and add-ons, through all our users' personas, by building bridges and pathways. Mozilla should find ways to integrate learning in its products, in-content, as we have code comments in code: on-boarding levels, progression from simple to high-level techniques, reproducible/universal next task/skill building.
Here are the developed ideas, with more reference to our allies and detractors’ products.
First of all, I think the strategy move Mozilla is making is the right one, as it embraces more of our real life. People are not defined by one characteristic; we are complex: e.g., we can be pedestrians, car drivers, bikers, public transport users… we think and act simultaneously. So why should Mozilla restrict its strategy to targeting people on skills, through education, through better material only (the Mozilla Academy program)? Education, even popular education, can't do everything for people to build change. We need a plan that balances the intellectual and the practical (abstraction/action, think/do), integrating progressive paths, to massively scale so we get an impact: build change.
Let’s start by some definitions based on my understanding of some Wikipedia articles. Sociology is the study of the evolution of societies: human organizations and social institutions. It is about the impact of the social dimension on humans representations (ways of thinking) and behaviors (ways of acting). It allows to study the conceptions of social relations according to fundamental criteria (structuralism, functionalism, conventionalism, etc.) and the hooks to reality (interactionism, institutionalism, regulationisme, actionism, etc.), to think and shape the modernity. Currently (and this is key for Mozilla’s positioning), the combination of models replace the models’ unity, which aims to assume the multidimensionality. There are three major sociological paradigms, including one emerging:
Consequently, Mozilla should build its strategy on historical (evolution) and sociological (human organizations, social institutions and social behaviors) analysis based on social networks (links as social interactions), in the perspective of producing commons. That is to say as an engine of transition from a model of value on its last leg (rarity capitalism) to the emerging one (new articulation of the individual and the collective: commons).
It is important and strategic to propose a sociological articulation supporting our mission and its purpose (commons) since the sociological concept (the paradigm) reveals an ideological characteristic: because it participates in societal movements made in the Society, it serves an ideal. The societal domain, what’s making society, a political object, should be a stake for Mozilla.
We should articulate ‘our real life’ with the nowadays tech challenge: how to get back control over our data at the time of IoT, cloud, big data, convergence (multi-devices/form factor)? From a user point of view, we have devices and want them convenient, easy and nice. The big moves in the tech industry (IoT, cloud, big data, convergence) free us for somethings and lock us for others. The lock key is that our devices don’t compute anymore our data that are in silos. From a developer point of view, the innovation is going very fast and it’s hard to have a complete open source toolbox that we can share, mostly because we don’t lead: Open has turn to be more open-releasing.
We should articulate our new strategy with the tech industry's moves: for example, as a user, how can I get (email) encryption on all my devices? Should I follow (fragmented) different kinds of howtos/tools/apps to achieve that? How do I know these are consistent together? How can I be sure it won't break my continuous workflow? (app silo? social silo? level of trust and reliability?)
Mozilla has the skills to answer this, as we already faced and solved some of these issues on particular points: like how to ease the installation of Firefox for Android for Firefox desktop users, open and discoverable choice of search engines, synchronization across devices, …
Mozilla's challenge is to not be marginalized by the change of practices. Having an impact means embracing the new practice and giving it an alternative. Mozilla already made that move by saying « Firefox will go where users are », by trying to balance the advertisement practice between ad companies and users, by integrating H.264 and developing Daala. But Mozilla never stated that clearly as a strategy.
If we think about Facebook's strategy, they first built a network of people willing to share (no matter what they share) and then used this transversal backbone to power vertical business segments (search, donation, local market selling, …). Google, with its search engine and its open source policy, has a similar (in a way) strategy. The difference here is that the backbone is people's data and control over digital formats. In both cases, the level of use (of the social network, search engine, mobile OS, …) is the key (along with fast innovation) to having an impact. And that's a major obstacle to building successful alternatives.
The proposed Mozilla’s strategy is built in the opposite way, and that’s questioning. We try to build people network depending on some shared matters. Then, is our strategy able to scale enough to compete against GAFAM, or are we trying to build a third way ?
For the products, Mozilla's strategy is still (and has always been) inclusive: everybody can use the product and then benefit from its open web values. A good product that answers people's needs, plus giving people back/new power (allowing new uses), builds a big community. For the network, should we build our global force of people based on concentric circles (of shared matters) or based on a (Mozilla-owned) transversal backbone (matter agnostic)? It seems to me the actual presentation of the strategy does not answer this big question clearly enough: which inclusivity (inclusion strengths) mechanism is in the strategy?
And that calls back to our product strategy: build a community that shares values, which is used to spread outcomes (product), OR build a community that shares a product, which is used to spread values. This is not a question of what matters more (product VS values) but of the strategy to get to a point, an objective (many web citizens). Shouldn't we use our product to build a people-network backbone? Back to GAFAM: what can we learn from Google's try with Google+?
If our core is not transversal enough (the backbone), the more new web/tech markets there are, the more we will be marginalized, because we are focused on our circles' center, not taking into account that the war front (the context) has changed. Mozilla has to be resilient: mutability of the means, stability in the objectives.
The document is the MoFo strategy, and so it doesn't say anything about 'build Firefox' (aka the product strategy) and so doesn't articulate our main product (Firefox) with our main people-network-building effort and values-sharing engine. We should do it: at a strategic scale and a particular scale (articulating the agenda-setting with main product features).
It seems that our GAFAM challengers get big and have impact by not educating (that much) people, and that’s what makes them not involved in the web citizenship. Or only when they are pushed by their customers. At the opposite, making people aware about web citizenship at first, makes it hard to have that much people involved and so to have impact. However, there is an other prism that drive people: the brand perceived values. Google is seen as a tech pioneer innovator and doing the good because of its open policy, free model, fast innovation… Facebook is seen as really cool firm trying to help people by connecting them…
Is the increase in Mozilla's marketing doing well enough to gain back users? Is this resilient compared to the next tech thing coming?
Most of the time when I meet Google Chrome users and ask them why they use it and don't switch to Firefox, they answer about the uses it allows (sync across devices, apps everywhere that run only on GC, …). Sometimes, they argue that they make efforts in other areas, and that they want to keep their digital life simple. Their experience is not centered on a product/brand, but more on the person: on that front, Google Chrome with its Person (with one-click 'auto-login' to all Google services) is far superior to Firefox.
A user-agent is an intermediary acting on behalf of a supplier. As a representative, it is the contact point with customers; its role is to manage, to administer the affairs; it is entrusted with a mission by one or more persons; it both acts and produces an effect.
So, the user-agent can be described with three criteria. It is: an intermediate (user/technology); a tool (used to manage and administrate depending on the user's skills); a representative (mission bearer, values vector, for a group of people). It partly exceeds the contradiction between being active and passive.
A user-agent articulates personal identity with technology identity and gives information about available skills over these domains. It's much more universal than a product, which is about featuring a user-agent. If we target resilience, the user-agent should be the target.
The way we look at past and current facts shapes our understanding and determines whether we open new ways to solve the issues identified. That's the way to understand the challenges that come along the way and to agree on an adaptation of the strategy instead of splitting things. The way we understand and tell the problem determines the solution we can create: we need, all the way long, a shared understanding.
Tools and technologies are not necessarily tied to their social value, which depends on social representations. The social value can be built upstream and evolve downstream. It also depends on the perspective in which we look at it, on the understanding of the action and therefore on past or current history. Example: the social value of a weapon can be a potential danger or defense, creative (liberating) or destructive. The nuclear bomb is a weapon of mass destruction (negative), whose social value was (ingeniously built as) freedom (positive).
To engage the public, before to « Focus on creative campaigns that use media + software to engage the public. » we need to step back, in our speeding world, for understanding together the big picture and the big movement.
Mozilla wants to fuel a movement and proposes a strong and consistent strategy. However, I think this plan misses a key point, a root point: build a common (hi)story. This should be an objective, not just an action.
Also, that’s maybe a missing root for the State of the web report: how do we understand what we want to evaluate? But it’s not only a missing root for an (annual?) report (a ‘Reporters without borders’ Press-Freedom like?), it’s a missing root for a new grow of our products’ market share.
For example, I do think that most users don't know and understand that Mozilla is a foundation and that Firefox is built by a community as a product to keep the web healthy: they don't imagine any meaning behind technology, because they see it as a neutral tool at its root, so as a tool that should just fit their producing needs.
Firefox, its technologies and its features are not bound forever. It is the narrative, and therefore their inclusion in the social history that we make, which converges Firefox with the values that it stands for. Stopping or changing the deep narrative means cutting the source of common understanding and making other consistencies, captured by other objects, stronger, turning them into centrifugal forces for Firefox.
Marketing is a way to change what we socially say about things: that’s why Google Chrome marketing campaign (and consistent features maturity) has been the decreasing starting point of Firefox. Our message has been scrambled.
How to emancipate people in the digital world ?
Being open is not a thing we can achieve, it’s a constant process. « Mozilla needs to engage on both fronts, tackling the big problems but also fuelling the next wave of open. » Yes, but Mozilla should say too how the next wave of open can stay under people’s control and rally new people. Not only open code, but open participation, open governance, open organization. Being open is not a releasing policy about objects, it’s a mutation to participation process: a metamorphosis. It’s not reached by expanding, but by shifting. It’s not only about an amount, but about values: it’s qualitative.
Maybe open is not enough, because it doesn’t say enough about who control and how, about the governance, and says too much about availability (passive) and not enough about inclusivity (active ; inclusion strengths). It doesn’t say how the power is organized and articulated to the people (ex. think about how closed is the open Android). We may need to change the wording: indie web, the web that fuel autonomy, is a try, but it doesn’t say enough about inclusivity compared to openness & opportunity. Emancipation is the concept. It’s strategic because it says what is aligned to what, especially how to articulate values and uses. It’s important because it tells what are the sufficient conditions of realization to ‘open/indie’. That’s key to get ‘open/indie at small and large scales, from Internet people to Internet institutions, thought all ‘open/indie’ detractors in the always-current situation: a resilient ecosystem.
My intuition is that the leadership network and advocacy engine promoting open will be efficient if we clarify ‘open’ while keeping it universal. We can do it by looking back at the raw material that we have worked for years, our DNA in action. Because after all, we are experts about it and wish others to become experts too. It does not mean to essentialize it (opposing its nature and its culture), but to define its conditions of continuous achievement in our social context.
Clarifying the idea of ‘open’ is strategic to our action because it outlines the constitution of ‘open’, its high ‘rules’, like with laws in political regimes. It clarifies for all, if you are part of it or not, and it tells you what to change to get in. It can reinforce the brand by differentiating from the big players that are the GAFAM: it’s a way to drive, not to be driven by others lowering the meaning to catch the social impact. We should say that ‘open’ at Mozilla means more than ‘open’ at GAFAM. I wish Mozilla to speak about its openness, not as an ‘equal in opportunity’ but as an ‘equal in participation’, because it fits openness not only for a moment (on boarding) or for a person, but during the whole process of people’s interaction.
Rust and Servo or Firefox OS (since the Mozilla’s shift to radical participation) seem to be very good examples of projects with participation & impact centric rules, tools, process (RFC, new team and owners, …). Think about how Rust and Dart emerged and are evolving. Think about how stronger has been the locked-open Android with partnership than the open-locked FxOS. We should tell those stories, not as recipes that can be reproduced, but as process based on a Constitution (inclusive rules) that make a political regime (open) and define a mode of government (participation). That’s key to social understanding and therefore to transpose and advocate for it.
As projects compared to ‘original Mozilla’, Rust, Servo and FxOS could say a lot about how different they implemented learning/interaction/participation at the roots of the project. How the process, the tools, the architecture, the governance and the opportunities/constraints have changed for Mozilla and participants. This could definitely help to setup our curriculum resources, database and workshop at a personal (e.g., “How to teach / facilitate / organize / lead in the open like Mozilla.”) and orgs levels, with personal and orgs policies.
Clarifying the constitution of ‘open’ calls to clarify other related wordings.
I'm satisfied to read back (social) 'movement' instead of 'community', because it means that our goal can't be achieved once and for all (it is not static), but we should protect it by acting. And it seems more inclusive, less 'folded on itself' and less 'build the alternative beside' than 'community': the alternative can be everywhere the actual system is. It can make a system. It can get global, convergent, continuous, … all at the same time. Because its roots are decentralized _and_ consistent, collaborating, …
About participation, we should think too (again) about engagement VS contribute VS participate: how much am I engaged ? Free about defining and receiving cost/gains? What is the impact of my actions ? … These different words carry different ideas about how we connect the ‘open’: spread is not enough because it diffuses, _be_ everywhere is more permanent. Applied to Mozilla’s own actions, funding open projects and leaders, is maybe not enough and there should be others areas where we can connect inside products, technology, people and organizations that build emancipation. So that say something about getting control (who, how, …).
AI is first developed to help us by improving our interactions. However, this seems to be starting to shift into taking decisions instead of us. This is problematic because these are indirect and direct ways for us to lose control, to be locked in. And that can go as far as computers smarter than humans. The problem is that technical progress is made without any consideration of the societal progress it should make.
That's another point why open is not enough: automation should be built in with superior humanization. Mozilla should activate and unlock societal progress to build fair technical progress.
The digital (& virtual) world is gaining control over the physical world in many domains of our society (economy to finance, mail to email, automatic cars, voting machines, …). It's getting more and more integrated into our lives without our (imperfect) democracy being integrated back into it. Public benefit and public good are turning into 'self benefit' and 'own sake' because citizens don't have control over private companies. We should build a digital democracy if we don't want to lose altogether the democratic governing of society. We must overcome the poses and posture battles about private and public. We need to build.
At some level, I’m not the only one to ask this question:
How do CRM requirements for Leadership and Advocacy overlap / differ? What’s our email management / communications platform for Leadership?
Connect leaders to lead what ? How ? To whose benefit ? Do we want to connect leaders or initiatives (people or orgs) ? Will the leaders be emerging ones (building new networks) or established ones (use they influence to rally more people)? Are Leaders leaders of something part of Mozilla (like can be Reps) or outside of Mozilla (leaders of project, companies, newspaper: tech leaders, news leaders, …) ? This is especially important depending on what is the desire for the leaders to become in the future. The MoFo’s document should be more precise about this and go forward than « Mozilla must attract, develop, and support a global network of diverse leaders who use their expertise to collaboratively advance points-of-view, policies and practices that maintain the overall health of the Internet. »
We should do it because the confusion about leadership impacts the advocacy engine: « The shared themes also provide explicit opportunities for our Leadership and Advocacy efforts to work together. » Regarding Mozilla, is the leaders' role to be advocacy leaders? It seems so, as they share themes and key initiatives (even if not worded the same sometimes). Or in other words, who drives the Advocacy engine?
Here are my iterations on the definition of ‘Leaders’:
With these definitions, then Leaders are maybe more a Lab, R&D place, incubation tool (if we think about start-up incubators, then it shows a tool-set that we will need to inspire for the future). But if we want to keep the emphasis on people, we could name them ‘creators’ (compositors or managers ; not commoners, because leaders and advocates are commoners ; yes, traditionally creators are craftspersons and intellectual designers). This make sens with the examples given in the MoFo 2020 strategy 0.8 document, where all persona are involved in a building-something-new process.
However, it’s interesting to understand why we choose at first ‘Leaders’. Leaders build new solutions (products) and Advocates new voices (rallying), they are both about personal development and empower commons. Leadership=learn+create and advocacy=teach+spread commons. Leaders are projects/orgs leaders, the ones that traduce DNA (values) in products (concrete ability and availability). Advocates are values advocates, the ones that traduce DNA (values) in actions (behavior). As they are both targeting commons, they both produce the same social organization (collaboration instead of competition). They are both involved to create (different) representation (institutions) and organization (foundation/firms) but with a different DNA (values) processing: from public good to personal interest or the opposite. If Mozilla cares about public good resilience, the articulation of they domains of values is critical. So their articulation’s expression and the revision process must be said and clear: from hierarchy vs contract vs different autonomy levels (internal incubation and external advocacy), vs … to criteria to start a revision.
Another argument for the switch from Leader to Creator is that the word Leader is too tightly tied to single-person innovation. Creator makes it clearer that the innovation is possible not because of one genius, but because of a team, a group, a collective: persons (where there could also be geniuses). The value is made by the collaboration of people (especially in an open project, especially in a network).
That’s important because that could impact how well we do the convening part: not self-promoting, not-advertising, but sharing skills and knowledge for people and catalysing projects.
The same for the wording ‘talent’: alone, a talent can do nothing that has an impact. At least, we need two talents, a team (plus some assistants at some point).
Again, this seems to be an open question:
Define and articulate “leadership.” Hone our story, ethos and definition for what we mean by “leadership development” (including cultural / localization aspects).
In my culture, Leader carries positive (take action) and negative (dominate) meanings. That's another reason why I prefer another naming.
I understand too that it carries a lot of legitimacy (ex. market leader) in our societies and it avoids the stay-experimental or non-massive (unique) thoughts. And we need legitimacy to get impact.
But the way Mozilla has an impact through all cultures, its legitimacy, is by creating or expanding a common. To do this, depending on the maturity, Mozilla could follow the market, proposing an alternative with superior usability, OR open a new market by adding a vertical segment.
If Leadership is « a year-round MozFest + Lab« , so it’s a social network + an incubation place. Then, we already have a social network for people involved with Mozilla: Which kind of link should have the leadership network with mozillians.org ? What can we learn from this project and other specialized social network projects (linkedin, viadeo, …) to build the leadership network ?
Mozilla is doing a great effort to build its advocacy engine on collaboration (« Develop new partnerships and build on current partnerships« , « begin collaboration« , « build alliances with similar orgs« ) but at the same time affirms that Mozilla should be « Part of a broader movement, be the boldest, loudest and most effective advocates » that could be seen as too centralized, too exclusive.
While this can be consistent (or contradictory), the consistency has to be explained looking at orgs and people, global and local, abstract and real, with a complementarity/competitive grid.
First, the articulation with other orgs has to be explained. What about others orgs doing things global (EFF, FSF, …) and local (Quadrature du net, CCC, …) ? What about the value they give and that Mozilla doesn’t have (juridic expertise for example) ? What about other advocate engines (change.org, Avaaz…) ? That should not be at an administrative level only like « Develop an affiliate policy. Defining what MoFo does / does not offer to effectively govern relationships w. affiliated partners and networks (e.g., for issues like branding, fundraising, incentives, participation guidelines, in-kind resources.) »
Second, this is key for users to understand and articulate the global level of the brand engagement and their local preoccupations and engagement. How the engine will be used for local (non-US) battles ? In the past Mozilla totally involved against PIPA, SOPA by taking action, and hesitate a lot to take position and just published a blog post (and too late to gain traction and get impact) against French spying law for example.
Third, the articulation 'action (own agenda)/reaction' should be clarified in the objectives and functioning of the advocacy engine, especially because other orgs, allies or detractors, try to set up the social agenda. It's important because it can change the social perception of our narrative (alternative promotion/issue fighting) and therefore people's contributions.
People think the technology is socially neutral. People are satisfied of narrow hills of choice (not the meaning, the aim, but only the ability to show your favorite avatar). People don’t want to feel guilty or oppressed, they don’t want new constraints, they are looking for solution only: they want to use, not to do more, they want they things to be done. Part of the problem is about understanding (literacy, education), part of it is about the personal/common duality, part of it is about being hopeless about having an impact, part of it is about expressing change as a positive goal and a new possible way (alternative), not a fight against an issue. About the advocacy engine, I think our preoccupation should be people-centric and the aim to give them a short, medium and long term narrative to get action without being individuals-centric.
How do we build a social movement? How has it been built in the past? Is it the same today? Can it be transposed to the digital domain from other social domains? How strong are the cultural differences between nations? These are the main questions we should answer, and our pivotal era gives us many examples in diverse domains (climate change advocates, Syriza & Podemos, NSA & surveillance services in Europe, empowered syndicates in Venezuela, Valve corp. internal organization…) to set a research terrain. However, I will go straight to my intuitive understanding below.
I’m kind of worried that it’s imagined to build the advocacy engine themes by a top-down method. I think a successful advocacy is always built bottom-up, as its function is to give back the voice to the people, to get them involved, not to make them fulfill our predefined aims. The top-down method is too organization centric: it can’t massively drive people that are not already committed to the org. It’s usually named advertisement or propaganda. If we want to have impact, we should listen to people needs, not tell them to listen to ours. People want (first) to be empowered, not to empower an org. So let’s organize the infrastructure, set the agenda and draw the horizon (strategic understanding) participative: make people fill them with content of their experience. It seems to me it is the only way, the only successful method, if we want to build a movement, and not just a shifting moment (that could be built by the top, with a good press campaign locally relayed for example ; that’s what happen in old style politics: the aim is short term, to cleave).
Isn’t the advocacy engine a new Drumbeat ? We shifted from Drumbeat to Webmaker+web literacy to Mozilla Academy and now to Leadership plus advocacy: it could be good to tell that story now that we are shifting again and learn from it.
Mozilla should support, behave as a platform, not define, not focus. Letting the people set the agenda makes them more involved and is a good way to build a network of shared aims with other orgs, that is not invasive or alienating, but a support relationship in a win-win move. The strength comes from the all agendas sewed. So at an org level, let’s on-board allies organizations as soon as plan building-time (now), to build it together. Yes it’s slower, but much more massive, inclusive and persistent.
First, about the agenda-setting KPI for 2016: should these KPIs be an evaluation of the inclusion and rank in others' strategic agendas, governance systems and productions (outcomes/products)? Other orgs could be from different domains: political, social, economic orgs.
Then, as a wide-audience KPI, Mozilla wants « celebration of our campaigns with 'headline KPIs' including number of actions, and number of advocates ». While doing this could be the right thing to do for some cultures, it could be the worst for others. I think that these KPIs don't carry meaning for people and are too org-centric. In a way, they are too generic: it's just an amount. Accumulation is not our goal: we want impact, that is, the growth of articulated actions made by diverse people toward the same aim. We need our massive KPIs to be more qualitative, or at least find a way to present them in a more qualitative way: an interactive map? a global-to-local prism that engages people for the next step?
Selecting best practices is an appealing method when we want to have a fast and strong impact in a wide area. However, when we unify we should avoid homogenizing. The gain in area by scaling up always comes at the cost of losing local impact, because it does not correspond to local specificities, hence to local expectations. Federating instead of scaling up is a way to solve this challenge. So we should be careful not to use best practices as absolute solutions, but as solutions in a context, if we want to transpose them massively.
It’s good to hear that we will build a advocacy platform. As we ‘had’ bugzilla+svn then mercurial (hg)+… and are going to the integrated, pluggable and content-centric (but non-free; admin tools are closed source) github (targeting more coder than users, but with a lower entry price for users still), we need to be able to have the same kind of tool for advocates and leaders. Something inspired maybe at some levels by the remixing tools we built in Webmakers for web users.
We need pathways from lab to home that carry different mix of customization and reliability to support the emancipation curve.
Users want things to work, because they want to use them. Geeks want to be able to modify a lot and accept putting their hands in the engine to build growing reliability. Advanced users want to customize their experience and keep control and understanding of the working status. They want to be able to fix the reliability at a medium/low technical cost. They are OK to gain more control at these prices. Users want to use things to do what they need and want to trust a reliability maintained for them. They are OK to gain control at no technical cost. Depending on the matter we all have different skill levels, so we are all geeks, advanced users and users depending on our position or on the moment. And depending on our aspirations, we all want to be able to move from one category to another. That's what we need to build: we don't just need to « better articulate the value to our audiences », we need to build pathways through audiences and through IT layers (content, software, hardware, distant service). We should find a convergence between customization and reliability, between first-time experience, support and add-ons, through all our users' personas, by building bridges and pathways. So, « better articulate the value to our audiences » should not be restrained in our minds to the Mozilla Leadership Network.
Part of this is being done in other projects outside of Mozilla in the commons movement. There are many, but let's take just one example, the Fairphone project: modularity, howtos, … all this helps to break the product-to-use walls and drive appropriation/emancipation. Products are less product- and brand-centric and more people/user-centric.
Part of this has been done inside Mozilla, like integrating learning in our products, in-content, as we have code comment on code. I think the Spark project on Firefox OS is on a promising path, even if maybe immature: it maybe has not been released mainstream because it misses bridges/pathways (on-boarding levels, progression from simple to high level techniques, and no or not enough reproducible/universal next task/skill building).
So some solutions start to emerge and the direction is here, but it has never been conceived and implemented that globally, as there aren't integrated pathways with choice and opportunity, nor a strategy embracing all products and technologies (platform, tools, …).
The open community should definitely improve the collaboration tools and infrastructure to ease participation.
Discourse ‘merged’ discussion channels: email+forum(+instant, messaging, … and others peer-to-peer discussion?). Stack exchange merged the questioning/solving process and added a vote mechanism to rank answers: it eased the collaboration on editing the statement and the results while staying synchronous with the discussion and keeping the discussion history. We need such kind of possibilities with discourse: capitalize on the discussion and preview the results to build a plan.
This exist in document oriented software (that added collaboration editing tools), but not that much in collaboration software (that don’t produce documents). For example, while discussing the future plan for Fx/FxOS be supported to keep track on a doc about the proposals plans + criteria & dependencies. In action, it is from this plus all the discussion taking place to that.
This is maybe something like integrating Discourse+Wiki, maybe with the need to have competing and ranked (both for content and underlaying meaning of content=strategy?) plan/page proposals. From evolving the wiki discussion page to featuring document production into peer-to-peer discussion.
There is maybe one thing that is in the shadow in this plan: what do we do when/if we (partially) fail ?
I think at least we should say that we document (keep research going on) to be able to outline and spread the outcomes of what we tried to fight against. So we still try to build consciousness to be ready for the next round.
If you see some contradiction in my thoughts, let’s say it’s my state of thinking right now: please voice them so we can go forward.
The same for thoughts that are voiced as definitive (like the ones about users): take them as a first attempt with my biases; let's state these biases to go forward.
pyvideo.org is an index of Python-related conference and user-group videos on the Internet. Saw a session you liked and want to share it? It's likely you can find it, watch it, and share it with pyvideo.org.
This is the latest status report for all things happening on the site.
It's also an announcement about the end.
One of releng’s big goals for Q1 is to deliver a beta via build promotion. It was great to have some tangible progress there this week with bouncer submission.
Lots of other stuff in-flight, more details below!
Modernize infrastructure:
Dustin worked with Armen and Joel Maher to run Firefox tests in TaskCluster on an older EC2 instance type where the tests seem to fail less often, perhaps because they are single-CPU or slower.
Improve CI pipeline:
We turned off automation for b2g 2.2 builds this week, which allowed us to remove some code, reduce some complexity, and regain some small amount of capacity. Thanks to Vlad and Alin on buildduty for helping to land those patches. (https://bugzil.la/1236835 and https://bugzil.la/1237985)
In a similar vein, Callek landed code to disable all b2g desktop builds and tests on all trees. Another win for increased capacity and reduced complexity! (https://bugzil.la/1236835)
Release:
Kim finished integrating bouncer submission with our release promotion project. That’s one more blocker out of the way! (https://bugzil.la/1215204)
Ben landed several enhancements to our update server: adding aliases to update rules (https://bugzil.la/1067402), and allowing fallbacks for rules with whitelists (https://bugzil.la/1235073).
Operational:
There was some excitement last Sunday when all the trees were closed due to timeouts and connectivity issues between our SCL3 datacentre and AWS. (https://bugzil.la/238369)
Build config:
Mike released v0.7.4 of tup, and is working on generating the tup backend from moz.build. We hope to offer tup as an alternative build backend sometime soon.
See you all next week!
Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...
Hello, SUMO Nation!
The second post of the year is here. Have you had a good time in 2016 so far? Let us know in the comments!
Now, let’s get going with the updates and activity summaries. It will be brief today, I promise.
Not that many updates this week, since we’re coming out of our winter slumber (even though winter will be here for a while, still) and plotting an awesome 2016 with you and for you. Take it easy, have a great weekend and see you around SUMO.
As an introduction to this weekend's Firefox OS Hackathon in Paris we'll have two presentations: - Guillaume Marty will talk about the current state of...
All the first Let's Encrypt certs for my websites from the LE private beta began expiring last week, so it was time to work through the renewal tooling. I wanted a script that:
The official Let's Encrypt client team is hard at work producing a great renew tool to handle all this, but it's not released yet. Of course I could use Caddy Server, which just handles all of this, but I have a lot invested in Nginx here.
So I wrote a short script and put it up in a Gist.
The script is designed to run daily, with a random start between 00:00 and 02:00 to protect against load spikes at Let's Encrypt's infrastructure. It doesn't do any real reporting, though, except to maintain /var/log/letsencrypt/renew.log as the most-recent failure if one fails.
It's written to handle Nginx with Upstart's service command. It's pretty modular though; you could make this operate any webserver, or use the webroot method quite easily. Feel free to use the OpenSSL SubjectAlternativeName processing code for whatever purposes you have.
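If you want a rough idea of what such a renewal job can look like before opening the Gist, here is a minimal sketch under assumed settings; the domain, webroot path and client flags below are placeholders, not the author's actual script.

#!/bin/bash
# Minimal sketch of a daily renewal job (run from cron at 00:00).
# Sleep a random 0-2 hours to avoid load spikes on Let's Encrypt's side.
sleep $(( RANDOM % 7200 ))

LOG=/var/log/letsencrypt/renew.log

# Placeholder renewal using the webroot method; adapt the domain and path.
if letsencrypt certonly --webroot -w /var/www/example -d example.com >> "$LOG" 2>&1; then
    # Reload Nginx via Upstart's service command so it picks up the new cert.
    service nginx reload
else
    echo "$(date): renewal failed for example.com" >> "$LOG"
fi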
Happy renewing!
A new year has begun, and with it we bring you new and interesting add-ons for your favorite browser that greatly improve your browsing experience. Over the next 6 months, new members will be working on the Add-ons Board Team; we will let you know here at Firefoxmanía when the next selection comes around.
uMatrix works much like a firewall: from a single window you can easily control every place your browser is allowed to connect to, what kind of data it can download, and what it can execute.
This may be the perfect extension for users who want advanced control.
⇒ HTTPS Everywhere by EFF Technologists
Protects your communications by automatically enabling HTTPS encryption on known sites that support it, even when you reach them through URLs that do not include the “https” prefix.
⇒ Add to Search Bar by Dr. Evil
Makes it possible to easily add any page with a search form to the Firefox search bar.
⇒ Duplicate Tabs Closer by Peuj
Detects duplicate tabs in your browser and closes them automatically.
We would love for you to be part of the process of selecting the best add-ons for Firefox, and we would like to hear from you. Not sure how? Just send an email to amo-featured@mozilla.org with the name of the add-on or its installation file, and the members will evaluate your recommendation.
Source: Mozilla Add-ons Blog
The Signal Private Messenger is great. Use it. It’s probably the best secure messenger on the market. When a desktop app was recently announced, people were eager to join the beta and even happier when an invite finally showed up in their inbox. So was I; it’s a great app and works surprisingly well for an early version.
The only problem is that it’s a Chrome App. Apart from excluding folks with other browsers it’s also a shitty user experience. If you too want your messaging app not tied to a browser then let’s just build our own standalone variant of Signal Desktop.
Signal Desktop is a Chrome App, so the easiest way to turn it into a standalone app is to use NW.js. Conveniently, their next release v0.13 will ship with Chrome App support and is available for download as a beta version.
First, make sure you have git and npm installed. Then open a terminal and prepare a temporary build directory to which we can download a few things and where we can build the app:
$ mkdir signal-build
$ cd signal-build
Download the latest beta of NW.js and unzip it. We’ll extract the application and use it as a template for our Signal clone. The NW.js project unfortunately does not seem to provide a secure source (or at least hashes) for their downloads.
$ wget http://dl.nwjs.io/v0.13.0-beta3/nwjs-sdk-v0.13.0-beta3-osx-x64.zip
$ unzip nwjs-sdk-v0.13.0-beta3-osx-x64.zip
$ cp -r nwjs-sdk-v0.13.0-beta3-osx-x64/nwjs.app SignalPrivateMessenger.app
Next, clone the Signal repository and use NPM to install the necessary modules. Run the grunt automation tool to build the application.
$ git clone https://github.com/WhisperSystems/Signal-Desktop.git
$ cd Signal-Desktop/
$ npm install
$ node_modules/grunt-cli/bin/grunt
Finally, simply copy the dist folder containing all the juicy Signal files into the application template we created a few moments ago.
$ cp -r dist ../SignalPrivateMessenger.app/Contents/Resources/app.nw
$ open ..
The last command opens a Finder window. Move SignalPrivateMessenger.app to your Applications folder and launch it as usual. You should now see a welcome page!
The build instructions for Linux aren’t too different, but I’ll write them down, if just for convenience. Start by cloning the Signal Desktop repository and building it.
$ git clone https://github.com/WhisperSystems/Signal-Desktop.git
$ cd Signal-Desktop/
$ npm install
$ node_modules/grunt-cli/bin/grunt
The dist folder contains the app, ready to be launched. zip it and place the resulting package somewhere handy.
$ cd dist
$ zip -r ../../package.nw *
Back to the top. Download the NW.js binary, extract it, and change into the newly created directory. Move the package.nw file we created earlier next to the nw binary and we’re done. The nwjs-sdk-v0.13.0-beta3-linux-x64 folder now contains the standalone Signal app.
$ cd ../..
$ wget http://dl.nwjs.io/v0.13.0-beta3/nwjs-sdk-v0.13.0-beta3-linux-x64.tar.gz
$ tar xfz nwjs-sdk-v0.13.0-beta3-linux-x64.tar.gz
$ cd nwjs-sdk-v0.13.0-beta3-linux-x64
$ mv ../package.nw .
Finally, launch NW.js. You should see a welcome page!
$ ./nw
Our standalone Signal clone mostly works, but it’s far from perfect. We’re pulling from master, which might bring breaking changes that weren’t sufficiently tested.
We don’t have the right icons. The app crashes when you click a media message. It opens a blank popup when you click a link. It’s quite big because NW.js itself still has bugs, so we have to use the SDK build for now. In the future it would be great to have automatic updates, and maybe even signed builds.
Remember, Signal Desktop is beta, and completely untested with NW.js. If you want to help, file bugs, but only after checking that they affect the Chrome App too. If you want to fix a bug that only occurs in the standalone version, it’s probably best to file a pull request and cross your fingers.
Great question! I don’t know. I would love to get some more insights from people that know more about the NW.js security model and whether it comes with all the protections Chromium can offer. Another interesting question is whether bundling Signal Desktop with NW.js is in any way worse (from a security perspective) than installing it as a Chrome extension. If you happen to have an opinion about that, I would love to hear it.
Another important thing to keep in mind is that when building Signal on your own you will possibly miss automatic and signed security updates from the Chrome Web Store. Keep an eye on the repository and rebuild your app from time to time to not fall behind too much.
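Since a rebuild is just a repeat of the commands above, a small helper script can take the pain out of it. This is only a convenience sketch based on the Linux steps from this post, not an official tool; adjust the paths if your layout differs.

#!/bin/bash
# Rebuild the standalone Signal clone from the latest master.
# Assumes the signal-build layout created earlier; run from its parent directory.
set -e

cd signal-build/Signal-Desktop
git pull                              # fetch the latest Signal-Desktop changes
npm install
node_modules/grunt-cli/bin/grunt      # regenerate the dist folder

cd dist
rm -f ../../package.nw                # start from a fresh archive
zip -r ../../package.nw *
cd ../..
mv package.nw nwjs-sdk-v0.13.0-beta3-linux-x64/   # replace the old package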
Git-cinnabar is a git remote helper to interact with Mercurial repositories. It lets you clone, pull and push from/to Mercurial remote repositories using git.
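For anyone who has not tried it yet, the day-to-day workflow looks like plain git, with Mercurial remotes addressed through hg:: URLs; the repository below is only a placeholder:

$ git clone hg::https://hg.example.org/some-repo
$ cd some-repo
$ git pull
$ git push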
These release notes are also available on the git-cinnabar wiki.
Development had been stalled for a few months, with many improvements sitting in the next branch without any new release. I used some time during the new year break, and after it, to straighten things up and create a new release, delaying many of the originally planned changes to a future 0.4.0 release.
- Support for git push -f.
- Support for the GIT_SSH and GIT_SSH_COMMAND environment variables.
- git cinnabar now has a proper argument parser for all its subcommands.
- A new git cinnabar python command allows running python scripts or opening a python shell.
- A new git cinnabar rollback command allows rolling back to a previous state of the metadata.
- A new git cinnabar bundle command allows creating Mercurial bundles.
Up until this release, the master branch was dedicated to releases, and development was happening on the next branch until a new release happened.
From now on, the release branch will take dot-release fixes and new releases, while the master branch will receive all changes that are validated through testing (currently semi-automatically tested with out-of-tree tests based on four real-life Mercurial repositories, with some automated CI based on in-tree tests to be used in the future).
The next branch will receive changes to be tested in CI once things are hooked up, and may have its history rewritten as a consequence of wanting passing tests on every commit on master.
This is our weekly gathering of Mozilla's Web QA team, filled with discussion of our current and future projects, ideas, demos, and fun facts.
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Written by Valentin Schmitt.
Entering the CCH (Congress Center Hamburg) between Christmas and New Year takes you somewhere other than Hamburg on Central European Time.
Most people you’ll meet will say they are from the Internet (or the Internets, if they are funny), and for a few days you’ll live in what a friend of mine called the Chaos Time Zone: a blurry mix of everyone’s time difference. Days are pretty short anyway and you’ll probably spend a lot of time under artificial light, so it won’t help your internal clock stay on track. The organizers will gently remind you of the 6-2-1 rule: 6 hours of sleep, 2 meals and 1 shower per day; that should keep you safe. You’ll probably meet a lot of great people, and will often have a hard time deciding which talk or workshop to go to.
This is the 32nd Chaos Communication Congress. Welcome, and have fun!
I looked for a screen printer, or anything to make myself a t-shirt with the message “Firefox OS is not dead!” on it, but surprisingly, given the variety of machines there, I couldn’t find one on site. I really wanted to do that, because most of the people I talked to about Firefox OS answered “But isn’t Firefox OS dead?”. I bet that won’t come as a surprise to you, as there was a lot of feedback from the community regarding what some might call “a PR disaster”. It made it very clear to me that we (still) have to communicate a lot on this topic, and very loudly, because the tech news websites will be less likely to spread the word this time.
Once this detail (*cough*) was clarified, almost everybody I had the chance to talk to showed a lot of interest in the project. The only ones who didn’t were the hardcore Free Software enthusiasts, who have been disappointed by Mozilla’s recent policy choices (like the tiles with advertisement, or the DRM support in Firefox desktop), and the people who care less about software freedom and prefer an iPhone to a free (as in freedom) mobile OS.
Mozilla has a pretty good image in the Free Software community, and the main reason people have never tried a Firefox OS device is simply that they never had the chance to do so (not many devices marketed in Europe or the US, not many ports to mainstream phones). Fortunately, I had some foxfooding devices to hand out. The foxfooding program had a very positive reception: most people were happy to have the chance to try the OS, participate in sending data to Mozilla and file bugs; some were eager to develop apps and to try to port the OS to their favorite phone or device (the RasPi got a bunch of them very excited).
More importantly, they really asked me how to flash a device, where to find the documentation to get started, and how to file a bug. The people I handed a device to planned to show it to their colleagues, friends and fellow hacktivists, and were very excited to have a phone with hardware good enough to provide a responsive experience.
“Is there a Signal app or any secure messaging app?”
“Can I use Tor?”
“Can I keep OSM maps in cache?”
“Is there an app for WhatsApp?”
These were the questions I was asked the most. It’s to be expected that the hacker community is looking for reliable privacy tools, but I was a bit surprised by the last question, which still came up several times.
An assembly is the name the Chaos Communication Congress gives to a physical place (typically a bunch of tables with a power outlet) within the CCH where people can gather to hack, share ideas and hold self-organized sessions on a particular topic or around a particular project; 277 were registered this year.
With the Mozilla Assembly, we had several sessions (directly at the Assembly or in dedicated rooms) over these 4 days:
On average, there were around 15 Mozillians at the Assembly and a continuous flow of people from different communities.
Other projects where Mozilla is involved were represented, like Let’s Encrypt, with a talk so successful that the conference room was full, and New Palmyra, for which Mozillians organized a session for 25 participants.
The hacker and maker communities have a real ethical and practical interest in a mobile or embedded OS that’s trustworthy and hackable; we share similar values, and Firefox OS is a great opportunity to strengthen the ties between us.