Nick Clifton sent out a release note a month ago that completely passed me by. Here are the bits I found interesting:
linux/x86 targets now default to enabling compression of debug sections. This can be reverted with the --enable-compressed-debug-sections=no configure option.
There’s a new --no-pad-sections option, which prevents the padding of sections; no doubt helpful in the tiny embedded platform world.
GDB and GDBserver are now built with a C++ compiler by default. I don’t know if that impacts end users much, but as a C++ developer I find it interesting.
You can now pass a negative repeat count to the ‘x’ command, to examine memory backwards from the given address.
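For example (my own illustration of the syntax, as a quick sketch):

```
(gdb) x/5i $pc     # the five instructions starting at the pc
(gdb) x/-5i $pc    # the five instructions ending just before the pc
(gdb) x/-4xw $sp   # four words of memory below the stack pointer
```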
Apparently there are improvements to the mechanisms provided to front ends.
There’s a new option, -fconstexpr-loop-limit=<n>, which sets the maximum number of iterations in a constexpr loop.
-fstrong-eval-order forces the evaluation of member access, array subscripting, and shift expressions in left-to-right order, and assignments in right-to-left order, as adopted for C++17. Enabled by default when using -std=c++1z.
I recently discovered powerline, thanks to a Fedora news article. Getting powerline running on your Bash terminal is completely trivial and discussed in the article. You just:
sudo dnf install powerline
Configure your shell to use the powerline daemon.
Add this to your .bashrc:

if [ -f `which powerline-daemon` ]; then
  powerline-daemon -q
  POWERLINE_BASH_CONTINUATION=1
  POWERLINE_BASH_SELECT=1
  . /usr/share/powerline/bash/powerline.sh
fi
Or, for the fish shell, add this to ~/.config/fish/config.fish:

set fish_function_path $fish_function_path "/usr/share/powerline/fish"
powerline-setup
Configure powerline to display git information
If all you want to do is get the git branch displayed on your powerline, that’s pretty easy, see for example this excellent article. But after I discovered powerline-gitstatus, I just had to have it.
Install the powerline-gitstatus segment:
pip install powerline-gitstatus
Setup a configuration
I’ve put my powerline configuration up on GitHub, so if you like, you can start with my configuration and play with it from there, simply by cloning my powerline-configuration repository into your local .config directory. I.e.:
git clone https://github.com/spacemoose/powerline_cofiguration.git powerline
Otherwise you can copy over the default configuration and follow the directions here.
Try out your new configuration
Since this article is focused on customizing our shell prompt, we are dealing with the powerline daemon, which means we must run

powerline-daemon --replace

when we want to see what effect our changes might have. BUT before you do that, I highly recommend running powerline-lint in case you forgot a comma somewhere.
Aah, good news. There’s a plugin that lets me post to my WordPress blog using Emacs org mode. The github repository is here. The main page documentation paints an unnecessarily complicated picture of what has to be done to get this puppy running. On emacs 24.5.1 all I had to do was:
And hey presto, I’m up and running. I can write my posts in a first class editing environment, using a first class markup language, and just export it with a keystroke (there are default keybindings, but I’m not using them yet). More is possible, but for the moment I’m satisfied with that. For example, there are instructions on the github page explaining how to create a post template, and how to customize your authentication, but so far I haven’t felt the need.
Now, let’s see if this leads to more posts on my part.
Since I started using org-mode some years ago, I’ve wound up using other document and presentation tools (like LaTeX, Word, PowerPoint…) less and less. I find it the most convenient way of generating virtually any kind of document or presentation.
Now I’m working a great deal in Unicode. Org-mode itself has no difficulty with Unicode, but exporting to PDF goes through the LaTeX engine, and one has to do a little extra work to make sure that works seamlessly.
The simplest means I have found to getting LaTeX to interact correctly with Unicode is to use Xetex. One needs to:
Install XeTeX (use your system’s package management).
Tell org-mode to use XeTeX when generating PDFs from the LaTeX files (see the elisp snippet below).
Remove inputenc and fontenc from the list of default packages that org includes when exporting LaTeX, and add fontspec instead.
So I added the following to my .emacs:

;; fontify code in code blocks
(setq org-src-fontify-natively t)
;; Delete inputenc and fontenc from the default packages.
(setq org-latex-default-packages-alist
      (delete '("AUTO" "inputenc" t) org-latex-default-packages-alist))
(setq org-latex-default-packages-alist
      (delete '("T1" "fontenc" t) org-latex-default-packages-alist))
;; Add minted and fontspec to the packages included when exporting.
;; Alternatively you can add these by customizing org-latex-packages-alist.
(add-to-list 'org-latex-packages-alist '("" "minted"))
(add-to-list 'org-latex-packages-alist '("" "fontspec"))
;; Tell the latex export to use the minted package for source
;; code coloration.
(setq org-latex-listings 'minted)
;; Use xelatex, with the -shell-escape option so latex can execute
;; external programs (minted needs this).
;; This obviously can be dangerous to activate!
(setq org-latex-pdf-process
      '("xelatex -shell-escape -interaction nonstopmode -output-directory %o %f"))
The other day I was in a discussion with some C++ developers, where one of them stated, most definitively, that “cout is not buffered”. Now I have to admit that I was flabbergasted by how wrong this assertion was, and my first instinct was to question the capabilities of the developer in question. As I look back on the vast majority of the code that I have worked with, however, it’s pretty clear that most C++ developers are either unaware that cout is indeed buffered, or they are unaware of the side effect of std::endl, or they just don’t think about the impact it causes. Consider the following two lines of code:

std::cout << "Hello, world!" << std::endl;
std::cout << "Hello, world!\n";
Now, neither of these lines of code is more readable than the other. Neither is more maintainable than the other. The endl variant can however take significantly more time to execute than the \n variant. Why? Because std::endl has two effects:
It inserts a ‘\n’.
It inserts a std::flush into the stream, flushing the buffer.
If you are using a std::endl where a ‘\n’ will do (i.e. you do not need to explicitly flush the buffer), you are creating what Sutter and Alexandrescu call a “premature pessimization” in their excellent book “C++ Coding Standards”. Despite this, the endl variant is much more common. Whenever I ask anyone why they are using endl’s all over the place instead of \n, the typical answer is “well, it’s more the C++ way to do things”. That’s just not true — it’s not the C++ way to consume resources meaninglessly.
Rule 1 is “don’t optimize prematurely.” This means you should not make your code less readable, more complex, or less maintainable for the sake of dubious performance benefits. A corollary to this rule, however, is “don’t pessimize prematurely”. If two variants are equally readable, equally clean, and equally maintainable, prefer the more efficient variant. This is just a question of correcting ignorance and forming good habits.
So you might be curious if this performance difference is measurable, and the answer is of course it is. You can test it yourself with the following benchmark:
#include <chrono>
#include <iostream>

namespace chrono = std::chrono;

int main()
{
    constexpr unsigned int numLines = 100000;
    auto start = chrono::high_resolution_clock::now();
    for (unsigned int i = 0; i < numLines; ++i)
        std::cout << "This is a prematurely pessimized line" << std::endl;
    auto pess = chrono::high_resolution_clock::now();
    for (unsigned int i = 0; i < numLines; ++i)
        std::cout << "This is not a prematurely pessimized line\n";
    std::cout << std::endl; // flush the buffer so the comparison is
                            // only biased in favor of pessimized
    auto np = chrono::high_resolution_clock::now();
    double durp  = chrono::duration_cast<chrono::milliseconds>(pess - start).count();
    double durnp = chrono::duration_cast<chrono::milliseconds>(np - pess).count();
    // Use cerr for benchmark results, so we can redirect the noise.
    std::cerr << "\n==============================\n"
              << "pessimized code took: " << durp << "ms.\n"
              << "unpessimized took   : " << durnp << "ms.\n"
              << "Buffering saved: " << durp - durnp << "ms., or "
              << 100 * (durp - durnp) / durp << "% speedup." << std::endl;
}
Compiling on gcc with -O2, I get the following results:
Output to terminal:
pessimized code took: 5370ms.
unpessimized took : 4806ms.
Buffering saved: 564ms., or 10.5028% speedup.
~/Personal/Miscelaneous[master]$ ./a.out > /dev/null
Output to /dev/null
pessimized code took: 45ms.
unpessimized took : 6ms.
Buffering saved: 39ms., or 86.6667% speedup.
~/Personal/Miscelaneous[master]$ ./a.out > tmp
Output to file:
pessimized code took: 365ms.
unpessimized took : 79ms.
Buffering saved: 286ms., or 78.3562% speedup.
If you are writing code which uses output streams a lot, like logger functionality, or file output, this can make a huge difference to your resource consumption, and you’ll never see the needless waste in a profiler. So form good habits. Unless you need to flush the buffer for some reason (which in fact is a rare need unless you’re dealing with concurrency issues), prefer the \n construct.
When I was in university I attended a couple of semesters of psychology. One of the things that made the strongest impression on me was a graph the professor showed of human performance vs. motivation. I googled around a bit to see if I could find a copy of the curve, and I came across the following, which offers a little more interpretation than my psych prof did, but does an excellent job of communicating the point. It comes from an excellent post at psyprogrammer.com.
As I understand it, this curve comes originally from studies on factory productivity. While it’s difficult to obtain bulletproof data on harder-to-quantify tasks like programming, my take from the psych lecture was that this curve extends pretty universally across all human activity. This not only matches personal experience well, but seems to be pretty well accepted by the scientific community. An interesting aspect of the curve is its asymmetric nature: being slightly undermotivated has significantly less negative impact than being overmotivated by the same degree.
It’s an unfortunate truth that most people in managerial roles are woefully unfamiliar with this graph. They typically overwork both themselves and their employees, to the detriment of productivity, all the while taking a macho sense of pride in how hard they work. I can’t tell you how often I have heard people bragging about the excessive number of hours they work, while the rest of us groan under the weight of their emotional instability, error-prone work, and poor judgment.
Even when employees are disciplined about working reasonable hours, getting sufficient rest, and maintaining a positive work-life balance, excessive demands take a toll on productivity, and this is no less true of software development than of any other task. Software development is a kind of craft, in which intellectual discipline, creative thought and disciplined craftsmanship must all be combined to produce optimal results. Managers who seek to whip their development teams up into a frenzy of panicked development wind up destroying their teams’ productivity.
I have all-too-often found myself in the position where management comes by the development teams on a regular basis to tell them “If we don’t develop feature X in time Y, we will go out of business. The fate of the company is in your hands!” Typically X and Y vary heavily, even within the same project, sometimes over very short time intervals. Every time I have found myself in this situation, the deadline has been missed, and the company survived. Besides resulting in skepticism and distrust in management, this has some very harmful effects.
In the best case, the programmers ignore the dire warnings, maintain a zen-like attitude to their work, and continue to strive to perform to the best of their abilities. In this case the only harm is the destruction of trust in management. I’ve never actually seen this case, but it is a theoretical outcome. In the worst case people take the warnings/threats seriously, start hurrying production, and start sacrificing their personal time to the goals of the project. Shortcuts are taken and technical debt is accrued in an attempt to meet short-term goals. Developers get stressed and start to resent design discussions, making the discussions longer and less fruitful. Developers get tired, and their judgment and emotional equilibrium are impaired. Stress leads to strife and illness. In their haste developers stop taking time to mull their algorithms over, and errors creep in. The time it takes to fix these haste-bred bugs dwarfs the time it would have taken to calmly develop a correct implementation, which spirals the project further into stress, cynicism and despair. Stressed-out workers get sick, and feel pressured to work anyway, so they infect their stressed coworkers, leading to still more lost productivity. Eventually the project collapses under the weight of exhaustion, mistakes, strife, and technical debt. Actual results tend to fall somewhere between these two extremes, but heavily weighted towards the latter outcome.
I meditate regularly. I read books on Zen and try to put what I learn into practice. I exercise religiously, eat healthily and have a very happy relationship. Still I find myself slipping into unproductive stress levels when harangued by emotional and irrational managers. Today I spent about an hour tracking down and debugging a completely moronic error. I had written the following code:
/// If there is a next element, and it should be written, write a ",\n",
/// otherwise write a ";\n".
m_fstream << (((i+1 < res.GetSize()) && shl::is_a_supported_type(res.GetItem(i+1).GetChoice())) ? "," : ";")
It actually does exactly what the comment says it does, but it should in fact only write a semicolon if we are at the end of the input. This bug crept in under precisely the conditions I described above: a manager ranting and raving on a daily basis about how we would go out of business if this feature wasn’t implemented yesterday, and why the heck is it taking so long, and so on. I was chastised for spending so long testing the code which was delivered, but the above bug wasn’t detected for two weeks, which implies that no one noticed the error for two weeks. Since this bug prevents data from being successfully loaded into the client application, this casts a bit of doubt on how urgent the update really was. In any case we still have to re-build and deploy the bug fix to the customer before he can use the feature, which means another hour or two of developer time, and a total of about three weeks of delay for the customer. All because I felt stressed enough to get out of my zone and make a completely stupid mistake I would never have made under less hurried circumstances. Doubtless this is not the last such bug I will discover, either.
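For what it’s worth, here is a sketch of the corrected logic (the container and predicate are stand-ins I invented for the original res/shl types): emit a semicolon only when no later item will actually be written, and a comma otherwise.

```cpp
#include <string>
#include <vector>

// Stand-in for shl::is_a_supported_type(...): only even "choices" are
// written, to give the look-ahead something to skip over.
bool is_supported(int choice) {
    return choice % 2 == 0;
}

// Write each supported item followed by ",\n", except the last one
// actually written, which gets ";\n".
std::string write_items(const std::vector<int>& items) {
    std::string out;
    for (std::size_t i = 0; i < items.size(); ++i) {
        if (!is_supported(items[i]))
            continue;
        bool more_to_write = false; // will any later item be written?
        for (std::size_t j = i + 1; j < items.size(); ++j) {
            if (is_supported(items[j])) {
                more_to_write = true;
                break;
            }
        }
        out += std::to_string(items[i]);
        out += more_to_write ? ",\n" : ";\n";
    }
    return out;
}
```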
All of this is an avoidable phenomenon if one understands the relationship between motivation and performance. If you really care about doing your best work, you won’t let yourself get overmotivated (stressed, hurried), and will try to keep yourself in the optimum zone. Sadly this is terribly difficult if your manager is a stress junkie who perceives healthy, productive workers as undermotivated.
When I was working for a mega-corporation, one of the recurring battles I had to fight was the standardization battle. The assumption was “if a little standardization is a good thing, then a lot of standardization must be even better”. Of course this is simply not true. Excessive standardization stifles creativity, decreases productivity and increases risk. Standardization is a useful tool to increase productivity where it is appropriate. But just as you shouldn’t hammer in a screw, there are places where you should think about deregulating, not standardizing.
Recently I found myself in a conversation with a new coworker who asked me “So what are you standardizing?” (out of the blue, no context). I was a bit mystified and replied that I wasn’t standardizing anything, I was developing code. On a separate occasion he asked me if I had any ideas how we could “standardize our testing”.
One of the sad truths at my current workplace is that we simply don’t have enough testing going on. We need to improve our testing. We need to extend it. We need to develop it. We don’t need to standardize it, though… what good would that do?
To standardize something is to take something that is inhomogeneous and make it more homogeneous. This is a useful technique — for example standardizing communication protocols and power outlets has been a tremendous boon. In a software shop that is just doing chaotic testing, you don’t start with the question “how can I standardize this?”, you start with the question “how can I improve this?”. It may be that developing standards is one of the tools you use, but it may not be. In our shop we have three teams — a back-end team, a middleware team, and a front-end team.
For the back-end team we need more unit tests, automated functionality tests, and automated integration tests. We don’t really need any manual test execution. For the front-end it’s quite different: there we need a database of usability tests, most of which will have to be executed manually. This means that even the testing schedules will likely be quite different, so standardization will probably play only a small role, although formalization might play a larger one.
So all of this went through my head when the question was asked, and I replied “Standardization is the wrong word. We need to improve our testing. We are working on that.” The disturbing thing in the conversation is the realization that the word “standardization” seems to have lost its very specific meaning. I suspect that when managers and executives get in a room and someone says “We have standardized our testing”, everyone responds “bravo, well done”, rather than asking the obvious question — “what benefit did the standardization bring?”.
It’s best practice to do warning-free builds. Those harmless warnings you are ignoring could be hiding an important warning that points at a real bug in your code.
Recent versions of GCC support disabling specific warnings.
Sometimes we use external libraries which we trust, or must accept — we don’t want to muck around with their internals — say for example the boost libraries.
Given these points, when we include a header which creates warnings, we’d like to disable just those warnings, for just those header files. For the GCC compiler this can be done by bracketing the include with diagnostic pragmas, for example (the warning name here is just an illustration):

#pragma GCC diagnostic ignored "-Wunused-parameter"
#include <some_noisy_header.h>
#pragma GCC diagnostic warning "-Wunused-parameter"
This comes up often enough I cranked out a trivial bit of elisp so I can do this in emacs a bit more automatically:
; @todo let this work if we have a range too.
(defun insert-pragmas (pragma-name)
  "Wrap the current line with pragmas disabling, then restoring, a warning."
  (interactive "MWarning to disable: ")
  (if (string= "" pragma-name)
      (message "ignoring empty pragma name")
    (progn
      (beginning-of-line)
      (insert "#pragma GCC diagnostic ignored \"-W" pragma-name "\"\n")
      (end-of-line)
      (insert "\n#pragma GCC diagnostic warning \"-W" pragma-name "\"\n"))))
I recently encountered an anti-pattern in the software world that is fundamentally a human behavior anti-pattern. I haven’t come up with a cute name for it yet, but it boils down to confusing the goal of a particular activity with the tools used to achieve the goal. In other words, the tools start to take a higher priority than the goal, and the goal suffers.
It’s easy to think of examples, in and out of the software world:
Consistency in programming style: Consistency is something to strive for in your programming style, as consistency tends to make code better organized and more readable. The fundamental goal is readable, well organized, efficient code. Consistency is a tool used to achieve that goal. It’s actually a very good tool for that goal, to the point of being an indicator of how well the goal has been achieved. But it can be overdone, and it must be remembered that the goal is clean, efficient, readable code, and that trumps consistency.
Unions: I am loath to criticize unions, as unions are a very important tool for democratization and social change. Fundamentally a union is nothing more than a tool that allows working people to negotiate on an equal footing with the power elites of capitalist societies. This is inherently a good idea, and even a bad union is better than no union. Unions suffer from an undeservedly bad reputation in America thanks to a massive propaganda campaign (the Wisconsin plan, I believe it’s called). The anti-union plan has involved propaganda attacks on the unions at a deep cultural level over the last hundred years, and a systematic corruption and co-opting of the unions themselves. This system of corruption only works when the unions (and typically this is the union organizers, not the union members themselves) allow themselves to prioritize the Union over the union, i.e. the organization over the solidarity.
Flag burning, the patriot act, extraordinary rendition, overzealous patriotism, etc: What is any of this besides valuing the symbol over what is being symbolized? Patriotism, having pride in a flag, these concepts only have value when the nation being idealized has a value worthy of being idealized and spread. What value does our flag have when we have to restrict our freedoms to prevent people from ‘desecrating’ it?
Virtually any concept in software engineering: single responsibility, small functions & classes, design patterns… Like consistency, the real value of these concepts is that they provide conceptual tools which allow us to create cleaner, more concise, more flexible, understandable, robust, and maintainable code. But every single one of these concepts can be pursued to the point where it produces worse code.
Zen. Believe it or not, I’ve heard of a guy who zealously practices zen meditation, but every time he loses his cool about something (loses his zen), he gets so worked up over his failure that he becomes temperamental and intolerable for weeks.
So let us be careful to understand what the goals are, and what the means are.