Echo Linux

Briefs Developer Releases Code for iPhone App

After waiting three months for Apple to approve his iPhone application, developer Rob Rhyne announced late last week that he was releasing the 1.0 code of his program on GitHub for free (as in freedom). The heartbreaking part is why he has released it:


I’m a new indie of four months and trying to feed a family. I have already poured hundreds of hours preparing Briefs for a public release while simultaneously bootstrapping my consulting business with Jeff and Dave. With the status unknown and Apple openly opposing the strategy behind Briefs, it has become hard to stay motivated.

I tend to forget iPhone applications and the like are often considered money-making. Most of the apps I download are free, and the ones I pay for are all under $5. Most are less than $1. This feels more like an entry-fee than something a person would use as a sizable income, but I know in my brain that people are doing just that.

Mr. Rhyne has been denied this, which royally sucks. I’m a bit disappointed all the way around, too. On one hand, I’m annoyed that Apple didn’t get on with approving it (and now that the code is open source, they may use that as an excuse to reject it since they already had security concerns about the software. “Now that everyone can see it, they could learn how to exploit it!”). They have denied Mr. Rhyne a source of income and dragged their heels in a very inefficient and unprofessional fashion. But I’m also disappointed and a bit frustrated that open source was a last option for Mr. Rhyne, rather than first.

Will he make money this way? If yes, then why didn’t he open it up to begin with? And if not, why do so now? If he had opened it to begin with, he’d have had an income the last three months.

Since the code’s just out on GitHub, it seems likely he’s making nothing off this money-wise. If Apple does ever approve the application, many interested will probably prefer to buy it rather than compile it themselves, but the whole thing just seems odd.

If Mr. Rhyne was committed to open source (which he isn’t, necessarily–no assumptions here, just saying if he was), he’d have released it upon completion. If he’s not, he’d continue to pursue selling it and not undercut his own work and sales.

This is the first I’ve heard of Briefs, and my first impression is that he gave up. Perhaps that’s unfair, but news of the code being released as open source isn’t exciting to me, it’s just disappointing.


Now that I’ve written all that out, let me give you a summary from a far more cogent writer on Slashdot:

One would think he could easily cross to the dark side, and release his app in the Rock store, or the Cydia store.

In fact, I would be surprised if someone doesn’t take the code, compile the app, and release it as a .deb anyways.

But using the Cydia store features, the developer could still make quite a bit of money.
Sure, it limits your app to jailbroken devices, but that is a very large number of devices compared to zero as the current situation goes.

I’m sure he has his reasons and all, I am just curious what they might be.


Update: I emailed Mr. Rhyne about this and he kindly wrote back to me. I’ll hopefully be able to write an updated post sometime today that includes many more details that help clarify this situation, and he is currently writing a far more elaborate blog post to explain the same. I’ll certainly link here once those are up :-)

Rob Rhyne Clarifies Release of Briefs Code

Bloggers, myself included, have a bad habit of punditry. We take a topic and run with it, not bothering to check facts or even ask many questions. And what questions we do ask, we generally ask of the wrong people. Having made this mistake a few times in the past, I committed myself to try and do better, so when I wrote my post full of questions yesterday I decided to write the software author and see if he would respond.

He did, quite promptly, and shared some really interesting information about his program and the release of version 1.0 on GitHub.

Since Mr. Rhyne only ended up posting the code on GitHub at the end, I assumed he had not done so to begin with and that the software hadn’t been open source. Turns out, Briefs had been open source since the beginning, but the license changed somewhat with the most recent release (1.0) so it could be put on Apple’s App Store.

All of the code had been on GitHub before 1.0, but Mr. Rhyne was waiting for the App Store to approve Briefs so he could synchronize what was available through iTunes with what was posted on GitHub. He wanted the same version to be in both places, and posting it to GitHub before it came out on the App Store would upset this balance.

That said, Mr. Rhyne is still committed to open source, and after receiving his email I would say he is more committed to open source than many of us. Another part of the delay is that he seeks to create an open source model for iPhone software. Apple’s licensing for applications they host is pretty restrictive, but Mr. Rhyne has been working with a lawyer to develop an entirely new sort of open source license. This new license would allow developers to retain their trademark and sell the brand if they so desire, and it would restrict others from just copying it to the App Store themselves and undercutting the original developer’s price, but other than that it would be wide open. Much of the language, Mr. Rhyne wrote to me, is sourced from the open MIT and BSD licenses.

Mr. Rhyne is currently writing a new blog post about why he chose not to release the application on Cydia or through other channels, and I’ll certainly link it from here when it comes out. While he does not believe releasing the source is likely to undercut his sales, moving to Cydia simply doesn’t meet his goals. Mr. Rhyne is focused on developers and designers, and the licensing model he is pursuing will help him provide his software to those communities.

Many thanks to Mr. Rhyne for writing back to me, and I look forward to reading his much more detailed post when it comes out.

Why not to use EXT4

I’m a sucker for version numbers. If the number’s higher, that means it’s newer and better, and that means I want it.

In the case of EXT4, I assumed it was better. It’s 4, right, which is higher than 3. EXT3 is old and inferior.

Then I started getting complaints. Someone’s power went out and their computer didn’t recover–their filesystem was corrupted. An update broke everything. Problems pile upon problems.


Turns out that the anecdotes are due to a change in how EXT4 works, and in what I would call a stupid, stupid flaw in its implementation.

Lifehacker ran an article about converting from EXT2 or EXT3 to EXT4, and the second comment outlined this problem perfectly:

EXT4’s default write mode makes it complete garbage. The reason it’s fast is that it doesn’t actually write data out to the disk for up to several minutes after you “save” something, but it writes out metadata saying it already did it much sooner. If your computer isn’t on a UPS or has a flaky closed-source video driver, that’s just asking for trouble.

Quote from Linus Torvalds:
“if you write your metadata earlier (say, every 5 sec) and the real data later (say, every 30 sec), you’re actually more likely to see corrupt files than if you try to write them together… This is why I absolutely detest the idiotic ext3 writeback behavior. It literally does everything the wrong way around — writing data later than the metadata that points to it. Whoever came up with that solution was a moron. No ifs, buts, or maybes about it.”

EXT4 uses similar writeback functionality to EXT3’s, but even less cautiously; EXT3 at least doesn’t default to that mode. I definitely will not be using EXT4 in that mode, ever. It completely defeats the whole point of metadata: to record whether the data was written successfully in the first place.

Ubuntu’s & Debian’s package manager is now incredibly slow because they had to hack in a flush to disk after every single package operation to ensure that the filesystem actually did what the metadata says.
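The workaround the comment alludes to, and what applications can do for themselves, is to explicitly flush data to disk before relying on it. Here is a minimal sketch in shell (the file paths are made up for illustration; a real application would call fsync directly instead):

```shell
#!/bin/sh
# Write a file "safely": write to a temporary name, force it to disk,
# then rename over the final name. Without the explicit sync, ext4 in
# delayed-allocation/writeback mode may hold the data in memory for a
# while after the metadata is already on disk.
printf 'setting=1\n' > /tmp/app.conf.tmp
sync                                # flush dirty data (and metadata) to disk
mv /tmp/app.conf.tmp /tmp/app.conf  # atomically replace the old file
cat /tmp/app.conf
```

This write-then-rename pattern is essentially the flush-per-operation behavior the comment describes dpkg being forced into, which is why it got slower.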

Are you using EXT4? How’s it working for you? I’ve had a power outage and recovered fine (Always use a UPS, kids, but when your cat steps on the power switch, it won’t matter), but I’ve also had an Ubuntu 10.04 update break my system and require a boot disc and some black magic to get it working again. All in all, Ubuntu’s letting me down a bit this year.

Learning Security Through ‘Damn Vulnerable Linux’

One of the best ways to learn about computer hardware is to take one apart and put it back together. Just the same, the best way to start learning about Linux security is to start looking into how to break it.


Damn Vulnerable Linux aims to teach by example. Its example: don’t do this.

I’m not much of a security guy, and I’m certainly not a programmer, but I do have some interest in these things. I think DVL might give me an excellent starting place to begin learning, so I’ll be installing it in the coming weeks and playing around. Anyone else interested in giving it a try with me? Head on over to the forum and drop me a line–I’ll post progress updates there.

Open Core; Open Source; Open Season

While reading up on the debate about WordPress themes, I stumbled onto a tangential but wider-ranging debate: Open Source vs. Open Core. The difference (for those of you like me who had never heard the term “Open Core”) is that the former provides full access to the source code and lets you do what you want with the software (within the framework of its license), while the latter has open source, licensed components at its core but adds proprietary components and services that one purchases from the company licensing the software.


Think of it like a pineapple. I can give you a pineapple, and you can do whatever you want with that pineapple. The pineapple is open source.

But eating the pineapple is hard. It’s tough and difficult to hold, and while you might be able to figure out how to deal with it eventually, it’s generally too much of a bother. So you buy something else to make it easier: your fruit seller has a bundle that includes a cutting board and a knife. The Pineapple Bundle has something at its core with which you can do as you please–if you want to give it away, or plant the seeds and make more pineapples, or whatever, that’s fine. But the knife and cutting board have restrictions: you have to keep them near the fruit seller’s, can’t give them away, and definitely can’t take them into banks or airports.

Pineapple = Open Source

Pineapple Bundle = Open Core

OK, so now that I understand a bit better what’s going on, what’s all the fuss about? Branding, says Dave Neary.

When you get down to it this is a fight over branding – which is why the issue is so important to the OSI folks (who are all about the brand). I don’t actually care that much how SugarCRM, Jahia, Alfresco et al make the software they sell to their customers. As a customer I’m asking a whole different set of questions to “is this product open source?” I want to know how good the service and support is, how good the product is, and above all, does it solve the problem I have at a price point I’m comfortable with. The license doesn’t enter into consideration.

Mr. Neary links to a few other blogs, all interesting reads, and as the observations develop it does seem to come down to perspective: are you approaching this debate from the perspective of a customer, or a developer/business owner? If you’re a customer, you often don’t care about the license. You might get warm fuzzies from the thought of “Open Source,” just like we love everything labeled as “green.” But really, we want to know if the product improves our lives in some way.

If you’re a developer, you want to make sure you have the ability to do the work you want to do, and you want to make sure everyone is being honest and fair in their word choice. I was pretty pissed about Microsoft’s “Open Office XML” format, which implies one thing but gives another. “Open Core” dilutes the brand of “Open Source” and just makes things more difficult.

So which side are you on? Does the label “Open Core” contribute or detract? Is it real and helpful, or just a marketing scheme? Share your thoughts in the comments section below.

If you liked this post, please share it with all your friends!

Switching to Google Apps for Small Businesses

What do you use for your business email account? For a few years now, I’ve had email through my web host, but I never liked it much. The web client was pretty poor, and it lacked crucial features like anti-spam. I could access my email through a mail client such as Microsoft Outlook or Mac Mail, and I could forward it to Gmail if I wanted, but the experience was never that great.

Google Apps has been available for educational institutions and other businesses for a while now, but I’ve never looked into it much. First, I assumed it was for larger businesses, and provided functionality I didn’t really need. Second, I thought it cost money. Turns out neither is quite true.

Google Apps is Free for Small Businesses

Hit the front page of Google Apps and it immediately says that it’s $50 per user per year. My current email is bundled with my web hosting, so why bother? Because that $50 is for premier Google Apps. Check out “Standard” by hovering over Apps Editions and then selecting Standard.

With the Standard setup, you can set up email for up to 50 users. You’re limited to 7 GB of email and don’t have some of the advanced stuff that Google offers, but you still get calendars, docs, and sites. For a small business like SilverPen, where the only employee is myself (for now, muahaha), it’s just perfect.

Google Apps Gives Great Benefit

The support documentation is great. I tried to think of a better way to introduce that statement and couldn’t. Even if you’re not much of a techie, these docs will help you get set up.

If you don’t already have a site, you can get a domain and a site set up through one of Google’s partners and Google Apps will be automatically configured. If you do already have one, their documentation will help you get everything set up.

In the end, you can set up your mail web access to go through your domain (I can now access my email through!) and have everything sent to your regular address go straight to Gmail, but it’s not using forwarding. Instead, Google will have you update your mail exchange settings with your web host so email is processed differently. This will make sure spam filters work properly and everything comes through more quickly.
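For reference, the web-host change boils down to pointing your domain’s MX records at Google’s mail servers. A sketch of typical zone entries (the hostnames are Google’s published ones; example.com and the exact priorities are placeholders, and your host’s control panel may present this differently):

```
example.com.    IN  MX   1  ASPMX.L.GOOGLE.COM.
example.com.    IN  MX   5  ALT1.ASPMX.L.GOOGLE.COM.
example.com.    IN  MX  10  ASPMX2.GOOGLEMAIL.COM.
```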

Why use Google Apps instead of Gmail?

For a small business, Google Apps gives me the flexibility and control that my web host affords, combined with the power and usability of Google.

First, my email address is through my domain instead of through Gmail’s domain. It’s much more professional to tell someone to email rather than

Second, I can still set up additional accounts for other people. I’m limited to 50 with the free, standard edition of Google Apps, but that should be more than enough for me. I might want to set up an email for another writer or consultant who works through SilverPen sometime in the future, and that’s not something you can do with regular Gmail.

Try it out for free!

Give it a whirl and let me know if you have any questions.

And if it’s not for you, what do you use instead? Let us know what you prefer and why!


Do you game on Linux? Why?

There’s a decently large community of people who game on Linux, myself included, but as I struggled to get Dragon Age to install, stabilize shadows, and stop spinning uncontrollably every time I touched my mouse, I wondered, “Why?”


I began gaming on Linux out of hatred for Windows Vista. Working at Missouri State University, I had an opportunity to start using Vista over a year before it was released, and the more I worked with it and the more I read, the more I disliked it. It wasn’t so much the performance of the operating system (the first release candidate was actually pretty good), but the DRM built into the operating system. The business model on which Vista was founded bothered me.

I decided that I wouldn’t buy it, and instead would transition to Linux. I’d been interested in it for a while, and I had used it on web servers, so it wasn’t completely foreign to me. It would be good to learn it for work as well, I reasoned, so I switched 100%. Over the course of two years, I learned to love and prefer it.

But it’s true that things don’t often Just Work. So I’m curious, Internet: if you use Linux regularly and even game on it, what motivates you?


Why do you use Windows?

Bit of a change in today’s question, but I’m interested in specifics here. What got you started using Windows? What keeps you using Windows?

I grew up using Microsoft Windows, and it was just natural. To my mind, there was nothing but Windows. I knew of Apple in a theoretical sense, but they seemed as worthless to me as Packard Bells. They weren’t Windows; what was the point?


Once I began my career with Computer Services at Missouri State University, I was introduced to alternatives. Co-workers used Linux, and we had several servers that ran the same. More friends suddenly had Macs. I thought I ought to learn more about these alternatives, so I started teaching myself.

When I got hired full-time, I decided to prioritize this part of my computer education and began using Linux 100%. I installed it on my home computer as my sole operating system, and I was able to do the same at work. Even in a complete Microsoft shop, I was able to switch to Linux about 90% for two years. I had to run a Windows virtual machine for Microsoft Outlook and our service ticket system, but other than that I didn’t stray from Linux.

I’m not the sort of person who thinks Linux is the cure to all the world’s woes. I think we have to use the right tool for the right job, and sometimes that’s Windows, or Linux, or Mac. It just depends on the job being done. But I was surprised how little I needed Windows.

Now I use Windows at work for screencast software like Camtasia (there really is no good Linux alternative right now), as well as some other office productivity and compatibility. At home I use Mac OS X for writing and socializing, and have a Linux box for testing and gaming. There’s nothing right or wrong about any of these in this context–they’re just different tools to help me work more efficiently depending on the task that needs done.

So why do you use Windows? What does it provide that keeps you from investigating alternatives?


Buildroot 2011.11 released: details on new features

As planned, Buildroot 2011.11 has been released at the end of November. You can download this release as a tarball or through the Git repository.

This release brings a set of new features on which I thought it would be nice to give some details.

The file and local site method

Each package in Buildroot defines from where the source code for the particular component being built is fetched. Buildroot has of course always supported fetching a tarball from HTTP or FTP servers. Later on, Buildroot added support for fetching from Git, Subversion and Bazaar repositories, for example by doing:



MYPKG_SITE = git://

The <pkg>_SITE_METHOD variable lets you define the fetching method. When not specified, Buildroot tries to guess it from the <pkg>_SITE value. Of course, in ambiguous cases such as Subversion or Git repositories over HTTP (as shown in the first example), <pkg>_SITE_METHOD must be specified.
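For instance, for a Git repository served over plain HTTP (the URL below is a made-up placeholder), the method cannot be guessed from the URL and has to be spelled out:

```
MYPKG_SITE = http://somecompany.example/repos/mypkg.git
MYPKG_SITE_METHOD = git
```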

This new version of Buildroot brings two new site methods: file and local.

The file site method lets you specify a local tarball as the source code for the component to be built. For example:

MYPKG_SITE = /opt/software/something-special-1.0.tar.gz

This can be useful for internal software that isn’t publicly available on an HTTP or FTP server or in a revision control system. This new site method was added by David Wagner, who was an intern at Free Electrons between April and September this year.

The new local site method lets you specify a local directory as the location of the source code to be built. Buildroot will automatically copy the contents of this directory into the build directory of the component and build it from there. This is very useful because it lets you version control your source code as you wish, make changes to it, and easily tell Buildroot to rebuild your component. Note that the copy uses rsync, so further copies are very fast (see the pkg-reconfigure and pkg-rebuild targets below). An example of using the local site method:

MYPKG_SITE = /opt/software/something-special/

This new site method has been implemented by myself, as a result of my experience using Buildroot with various Free Electrons customers.
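Putting these pieces together, a minimal hypothetical package file using the local site method could look like the following (the package name, path and dependency are illustrative, not taken from an actual package; as described below, the GENTARGETS macro no longer takes arguments as of this release):

```
# package/mypkg/mypkg.mk (hypothetical)
MYPKG_VERSION = 1.0
MYPKG_SITE = /opt/software/something-special
MYPKG_SITE_METHOD = local
MYPKG_DEPENDENCIES = libfoo

$(eval $(call GENTARGETS))
```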

The source directory override mechanism

The local site method described above is great for packaging special components that are specific to the embedded device one is working on, like the end-user application, or special internal libraries, etc.

However, there are cases where you need to work with a specific version of an open-source component. This is typically the case for the Linux kernel or the chosen bootloader (U-Boot, Barebox), or with other components. In that case, one may want to keep using Buildroot to build those components, but tell Buildroot to fetch the source code from a different location than the official tarball of the component. This is what the source directory override mechanism provides.

For example, if you want Buildroot to use the source code of the Linux kernel from /opt/project/linux/ rather than download it from a Git repository or as a tarball, you can write the following variable definition in a board/company/project/ file:

LINUX_OVERRIDE_SRCDIR = /opt/project/linux

Then, you reference this file through the BR2_PACKAGE_OVERRIDE_FILE option, in Build options -> location of a package override file. When building the Linux kernel, Buildroot will copy the source code from /opt/project/linux into the kernel build directory, output/build/linux-VERSION/, and then start the build process of the kernel.

Basically, this mechanism is exactly like the local site method described previously, except that it is possible to override the source directory of a package without modifying the package .mk file, which is nice for open-source packages supported in Buildroot but that require local modifications.
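A single override file (whichever file you point BR2_PACKAGE_OVERRIDE_FILE at) can redirect several packages at once; the paths below are illustrative:

```
LINUX_OVERRIDE_SRCDIR = /opt/project/linux
UBOOT_OVERRIDE_SRCDIR = /opt/project/u-boot
MYAPP_OVERRIDE_SRCDIR = /opt/project/myapp
```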

To summarize, here is my recommendation on how to use Buildroot for packages that require project-specific modifications:

  • You are using an existing open-source component on which you make some tiny bug fixes or modifications. In this case, the easiest solution is to add additional patches to the package directory, in package/<thepackage>/.
  • You are using an existing open-source component, but are making major changes to it that require proper version control outside of Buildroot. In this case, using the source directory override feature is recommended: it lets you keep the Buildroot package file unmodified while still using your custom source code for the package.
  • You have project-specific libraries or applications and want to integrate them in the build. My recommendation is to version control them outside of Buildroot, and then create Buildroot packages for them using the local site method. Note that in the <pkg>_SITE variable, you can use the $(TOPDIR) variable to reference the top source directory of Buildroot. I for example often use MYAPP_SITE = $(TOPDIR)/../myapplication/.

The <pkg>-rebuild and <pkg>-reconfigure targets

For a long time, when one wanted to completely rebuild a given package from scratch, one possibility was to remove its build directory completely before restarting the build process:

rm -rf output/build/mypackage-1.0/

Or, using the -dirclean target available for each package:

make avahi-dirclean

As these commands completely remove the build directory, the build process is restarted from the beginning: extracting the source code, patching the source code, configuring, compiling, installing.

In 2011.11, we have added two new per-package targets to make it easy to use Buildroot during the development of components:

  • make mypkg-reconfigure will restart the build process of mypkg from the configuration step (the source code is not re-extracted or repatched, so modifications made to the build directory are preserved)
  • make mypkg-rebuild will restart the build process of mypkg from the compilation step (the source code is not re-extracted or repatched, the configuration step is not redone)

So, a typical usage could be:

emacs output/build/mypkg-1.0/src/foobar.c
make mypkg-rebuild

However, beware that all build directories are removed when you do make clean, so the above example is only useful for quick testing of changes.

The case where -reconfigure and -rebuild are really useful is in combination with the local site method or the source directory override mechanism. In this case, when <pkg>-reconfigure or <pkg>-rebuild is invoked, the source code is synchronized from the source directory to the build directory before the build is restarted.

Let’s take the example of a package named mypkg for which package/mypkg/mypkg.mk contains:

MYPKG_SITE = /opt/mypkg

Then, to work on your package, you can simply do

emacs /opt/mypkg/foobar.c    # Edit as usual your project
make mypkg-rebuild           # Synchronizes the source code from
                             # /opt/mypkg to the build directory
                             # and restart the build

Integration of real-time extensions

In this 2011.11 release, an interesting addition is the integration of the Xenomai and RTAI real-time extensions to the Linux kernel. The Xenomai integration was initially proposed by Thomas De Schampheleire and then extended by myself, and I have also added the RTAI integration. This integration makes it possible to seamlessly handle the kernel patching process and the compilation of the required userspace libraries for those real-time extensions.

Conversion of the documentation to asciidoc

Back in 2004, one of my first contributions to Buildroot was to start writing documentation. At the time, the amount of documentation was small, so a single, simple HTML document was sufficient. Nowadays, Buildroot’s documentation has been extended significantly, and will have to be extended even further in the future. The approach of a single raw HTML document was no longer up to the task.

Therefore, I have worked on converting the existing documentation over to the asciidoc format. This allows us to split the source of the documentation into several files for easier editing, and lets us generate documentation in multiple formats: single HTML, split HTML, raw text or PDF.

Just run make manual in Buildroot 2011.11 to generate the manual. Note that the version available on the website is still the old HTML version, but it should soon be updated to the new asciidoc version.

Free Electrons contributions

Free Electrons has again contributed to this Buildroot release:

$ git shortlog -sen 2011.08..2011.11 | head -12
   126	Peter Korsgaard
   104	Gustavo Zacarias
    62	Thomas Petazzoni, from Free Electrons
    27	Yann E. MORIN
    21	Sven Neumann
    13	Yegor Yefremov
    10	Thomas De Schampheleire
     7	H Hartley Sweeten
     5	Frederic Bassaler
     4	Arnout Vandecappelle (Essensium/Mind)
     4	Maxime Ripard, from Free Electrons
     3	Baruch Siach

Our contributions have been:

  • Implementation of the source directory override mechanism
  • Implementation of the local and file site methods
  • Implementation of the pkg-rebuild and pkg-reconfigure targets
  • Conversion of the documentation to asciidoc and documentation improvements
  • Various improvements for external toolchain support: optimization of the toolchain extraction and copy (reduced build time), integration of the support of the CodeSourcery x86 toolchains, update of all CodeSourcery toolchains to the latest available versions
  • Removed useless arguments from the CMAKETARGETS, AUTOTARGETS and GENTARGETS macros, used by all packages in Buildroot. Instead, such pieces of information are automatically figured out from the package .mk file location in the source tree
  • Added the cifs-utils package (for mounting CIFS network filesystems), the libplayer package, and the picocom package.
  • Cleanup, improve and merge the Xenomai integration done by Thomas de Schampheleire, and implement the RTAI integration
  • Did a lot of cleanup in the source tree by creating a new support/ directory to contain various tools and scripts needed by Buildroot that were spread over the rest of the tree: the kconfig source code, the special libtool patches, various scripts, etc.

Next release cycle and next Buildroot meeting

The next release cycle has already started. After the meeting in Prague, it was decided that Peter Korsgaard (Buildroot maintainer) would maintain a next branch between the -rc1 and the final version of every release, in order to keep merging the new features for the next release while debugging the current release. This next branch for 2012.02 has already been merged. For example, the addition of the scp and Mercurial site methods has already been merged for 2012.02, as well as numerous other package updates.

On my side, besides usual package updates, I’d like to focus my work for this 2012.02 cycle on improving the testing coverage and on improving the documentation. My colleague Maxime Ripard is working on integrating systemd into Buildroot, as an alternate init mechanism.

The Buildroot community will also be organizing its next meeting in Brussels, on Friday, February 3rd, 2012, right before the FOSDEM conference. Buildroot users and developers are invited to join; just contact us through the Buildroot mailing list.

mkenvimage: a tool to generate a U-Boot environment binary image

Many embedded devices these days use the U-Boot bootloader. This bootloader stores its configuration in an area of the flash called the environment, which can be manipulated from within U-Boot using the printenv, setenv and saveenv commands, or from Linux using the fw_printenv and fw_setenv userspace utilities provided with the U-Boot source code.

This environment is typically stored in a specific flash location, defined in the board configuration header in U-Boot. The environment is basically stored as a sequence of null-terminated strings, with a little header containing a checksum at the beginning.

While this environment can easily be manipulated from U-Boot or from Linux using the above-mentioned commands, it is sometimes desirable to be able to generate a binary image of an environment that can be directly flashed next to the bootloader, kernel and root filesystem into the device’s flash memory. For example, on AT91 devices, the SAM-BA utility provided by Atmel is capable of completely reflashing an AT91-based system connected through the serial port or the USB device port. Or, in the factory, initial flashing of devices typically takes place either through specific CPU monitors or through a JTAG interface. For all of these cases, having a binary environment image is desirable.

David Wagner, who has been an intern with us at Free Electrons from April to September 2011, has written a utility called mkenvimage which just does this: generate a valid binary environment image from a text file describing the key=value pairs of the environment. This utility has been merged into the U-Boot Git repository (see the commit) and will therefore be part of the next U-Boot release.

With mkenvimage you can write a text file uboot-env.txt describing the environment, like:

bootcmd=tftp 22000000 uImage; bootm

Then use mkenvimage as follows:

./tools/mkenvimage -s 0x4200 -o uboot-env.bin uboot-env.txt

The -s option lets you specify the size of the image to create. It must match the size of the flash area reserved for the U-Boot environment. Another option worth having in mind is -r, which must be used when there are two copies of the environment stored in the flash thanks to the CONFIG_ENV_ADDR_REDUND and CONFIG_ENV_SIZE_REDUND options. Unfortunately, U-Boot has chosen to have a different environment layout in those two cases, so you must tell mkenvimage whether you’re using a redundant environment or a single environment.
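As a sketch of a slightly larger environment, the text file is simply one key=value pair per line (the extra variables below are common U-Boot settings used here as examples, not taken from any particular board):

```shell
#!/bin/sh
# Describe the U-Boot environment as key=value pairs, one per line.
cat > uboot-env.txt <<'EOF'
bootcmd=tftp 22000000 uImage; bootm
bootdelay=3
serverip=192.168.0.1
EOF
wc -l < uboot-env.txt
```

The resulting file is then passed to mkenvimage exactly as in the command shown above.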

This utility has proven to be really useful, as it allows automatically reflashing a device with an environment known to work. It also makes it very easy to generate a different environment image per device, for example to contain the device’s MAC address and/or serial number.
