On programming


BoogieMonster (May 31, 2011, 12:16:43 PM):
While we're digressing (will try to limit the extent... ps. I failed)...

Quote from: Mefiante
As for programming, the (almost) lost art of Assembler is frowned upon as no longer relevant from many quarters.

More money for me! You cannot truly understand how a computer operates without having that assembly language->machine code->bare metal thing click in your mind. Only people who have written at least SOME assembly, and have seen a CPU diagram or two, understand what a CPU is, and hence "how" a computer works. I agree this is sadly becoming a lost piece of understanding.

When writing fast, robust, numerically intensive solution engines for scientific or engineering applications, one can of course do this in a high-level language but the programmer has a significant advantage if s/he knows what the compiled code looks like at the CPU’s level. Compilers often blindly add bits of library code that the programmer may not even be aware of and that are unnecessary for the code in question.

Indeed, but as I think you imply, I'd still write it in good C++. The difference these days between that and raw assembly is negligible in all but the most extreme cases. Your I/O operations cost a lot more than CPU cycles, which can almost be seen as irrelevant. If you can do cache or memory optimisation... then yes, but from what I've read, mere mortals rarely understand a modern cache system well enough to beat it with userland code, and usually memory access will be governed by a (relatively) expensive call into your OS anyway, which may just decide to swap your highly hand-optimised memory access out to hard disk, unlucky. In 99.9999999% of cases, and even in some cases where people think they can do better, the compiler will be better. That part of the "forget assembly" argument I buy.
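To illustrate the point about memory access dominating raw instruction counts, here is a minimal sketch (the function names and matrix layout are mine, just for illustration): two loops that do identical arithmetic over a row-major matrix, differing only in traversal order. On large matrices the cache-hostile version can be several times slower, even though the compiled instruction streams are nearly the same.

```cpp
#include <cstddef>
#include <vector>

// Sum a row-major matrix, traversing rows first. Memory is walked
// sequentially, so cache lines and the hardware prefetcher are used well.
double sum_row_major(const std::vector<double>& m,
                     std::size_t rows, std::size_t cols) {
    double s = 0.0;
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            s += m[i * cols + j];
    return s;
}

// Same arithmetic, but column-first traversal: each access jumps
// cols * sizeof(double) bytes, so large matrices thrash the cache.
double sum_col_major(const std::vector<double>& m,
                     std::size_t rows, std::size_t cols) {
    double s = 0.0;
    for (std::size_t j = 0; j < cols; ++j)
        for (std::size_t i = 0; i < rows; ++i)
            s += m[i * cols + j];
    return s;
}
```

Both functions return the same value; only the access pattern differs, which is exactly the kind of thing neither hand-written assembly nor the compiler will fix for you if the algorithm itself strides badly through memory.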

But then a (good) C/C++ compiler for a PIC microchip costs a buttload of money, out of the range of a hobbyist. So there I write assembly. But even on a 20 MHz PIC your code gets executed so freaking fast (no OS, no task-switching, no nothing, just your code line by line at 20 MHz, which is actually pretty amazing) that I've not yet found a situation where a PIC isn't sitting idle most of the time. Hence they tend to build all kinds of idle-switching, power-save modes, etc. into them; even if your code has to run once every 10 ms, the chip can still go to sleep, save some power, and wake up again in time to do its job.

Also, CPU-specific optimisations, like effective multi-pipelining of concurrent instruction streams or instruction set extensions, are often lacking even from the best compilers.

It's (a very unfortunate) practicality thing... usually people will compile to target i686 or even earlier architectures to ensure backwards compatibility. Very seldom do you see someone custom-roll a bleeding-edge compile for the latest-and-greatest architectural advances. Perhaps more common in the sciences, but not very common in consumer software. AFAIK Intel's compiler is the best when it comes to stuff like this (only logical), but I haven't had the need to investigate this a lot.

It comes down to an understanding of what the code does at the CPU level, which understanding can help in eliminating a host of problems and inefficiencies before they occur.

A nice example of a bad problem is memory alignment issues. If a C++ developer doesn't understand what the compiler is going to do with certain data structures, he's gonna have code that works on his machine (probably by fluke) and crashes badly on another platform, and he may have no idea why.
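A small sketch of the alignment point (struct names are mine; the exact sizes and padding are ABI-dependent, which is precisely the portability trap): the compiler silently inserts padding so that each member sits at its required alignment, and merely reordering fields can change the struct's size.

```cpp
#include <cstddef>

// Field order forces the compiler to insert padding. On a typical
// 64-bit ABI: 1 byte tag + 7 bytes padding + 8 bytes value +
// 1 byte flag + 7 bytes tail padding = 24 bytes.
struct Padded {
    char   tag;
    double value;   // needs 8-byte alignment on most 64-bit ABIs
    char   flag;
};

// Same members, widest first: 8 + 1 + 1 + 6 bytes tail padding = 16 bytes
// on the same typical ABI.
struct Packed {
    double value;
    char   tag;
    char   flag;
};
```

Code that assumes a particular layout (e.g. by memcpy-ing structs over a network or casting raw buffers) works "by fluke" on one ABI and breaks, or crashes on strict-alignment hardware, on another.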
Mefiante (May 31, 2011, 14:02:15 PM):
Permit me to derail a little more. (Maybe spawning a new thread is in order… ;D )

Much of what you say is true for most everyday applications. And of course there are the platform-independence and code maintenance issues.

You claim that IO ops are costly. This is true if your IO source/target is non-volatile storage. But many specialised scientific/engineering problems are amenable to being memory managed in such a way that misalignment wait-states, fragmented memory blocks and virtual memory access penalties are minimised or even eliminated altogether. This requires a detailed familiarity with the hardware architecture and the innards of the OS you’re coding for.

While on the subject of “good C++,” are you aware of how awfully expensive the object destructors and, especially, the constructors are? I assume you make regular use of structured exception handling too. Do you know what exception handling costs? These can be huge obstacles in badly written numerical code.
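A minimal sketch of the constructor/destructor cost point (function and variable names are mine, purely illustrative): constructing a heap-allocating object inside a hot loop pays the constructor, a possible allocation, and the destructor on every iteration, whereas hoisting it out pays them once. Both versions compute the same answer.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Naive: a std::string (and a possible heap allocation) is constructed
// and destroyed on every single iteration.
std::size_t count_hits_naive(const std::vector<std::string>& lines) {
    std::size_t hits = 0;
    for (const auto& line : lines) {
        std::string needle("error");   // ctor + possible allocation, per iteration
        if (line.find(needle) != std::string::npos) ++hits;
    }                                  // dtor, per iteration
    return hits;
}

// Hoisted: one construction, reused across all iterations.
std::size_t count_hits_hoisted(const std::vector<std::string>& lines) {
    static const std::string needle("error");  // constructed once
    std::size_t hits = 0;
    for (const auto& line : lines)
        if (line.find(needle) != std::string::npos) ++hits;
    return hits;
}
```

In a kernel called billions of times, this per-iteration ctor/dtor traffic is exactly the kind of hidden cost that only shows up when you look at what the compiler actually emits.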

There are certain types of specialised scientific/engineering problems whose numerical treatments are intensively repetitive, e.g. finite element, finite difference and optimisation problems. (For reference, you might like to consider that our Weather Bureau’s daily forecast run took between four and five hours on a Cray-2.) Often, one has a small set of core functions, each of which is called hundreds of millions or even billions of times during a solution run. These functions may themselves be iterative. Saving even a small percentage of the clock cycles these functions require can result in a significant saving in execution time. One admittedly extreme example resulted in a reduction from over two hours down to less than three minutes (and, as a bonus, the reworked solution engine was much less prone to pathological crashes/exceptions), but halving or even quartering the run time is not uncommon. Thus, you need to understand the nature of the problem you’re dealing with as well as the platform you’re going to solve it on.
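A tiny example of the kind of cycle-saving meant here (names are mine; a hypothetical inner kernel, not any particular solver): in a finite-difference sweep that divides by a grid spacing once per element, hoisting the division out and multiplying by the precomputed reciprocal swaps an expensive FP divide (tens of cycles) for a cheap multiply inside the loop. Note the two are not bit-identical in general because of rounding, which is why compilers only make this transformation under flags like -ffast-math.

```cpp
#include <cstddef>

// Hypothetical core kernel: scale every node value by 1/h,
// called once per element per iteration of a sweep.
void scale_naive(double* v, std::size_t n, double h) {
    for (std::size_t i = 0; i < n; ++i)
        v[i] = v[i] / h;            // one FP divide per element
}

// Same result here: the divide is hoisted out of the loop and
// replaced by a multiply per element.
void scale_reduced(double* v, std::size_t n, double h) {
    const double inv_h = 1.0 / h;   // one divide, outside the loop
    for (std::size_t i = 0; i < n; ++i)
        v[i] *= inv_h;
}
```

Multiplied across the hundreds of millions of calls described above, savings of this kind are where run times drop from hours to minutes.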

I’m not saying that you have to write your entire program in highly optimised code. That would be a waste of much effort. However, where the nature of the problem warrants it, you should write the critical parts in carefully optimised code such as Assembler, and then link the assembled object code into the rest of your program using the high-level-language development environment (some of which allow you to make use of inline Assembler code). Experience has shown that the extra effort is worth it but it takes a practised eye to gauge this properly.
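For the inline-Assembler route, here is a minimal sketch in the GCC/Clang extended-asm dialect, assuming an x86-64 target (the function name is mine; MSVC and other compilers use different syntax, hence the portable fallback):

```cpp
#include <cstdint>

// Add two integers with an explicit ADD instruction instead of
// leaving the codegen to the compiler.
inline std::int64_t add_asm(std::int64_t a, std::int64_t b) {
#if defined(__x86_64__)
    asm("addq %1, %0"       // a += b
        : "+r"(a)           // output: a, read-write, in a register
        : "r"(b));          // input: b, in a register
    return a;
#else
    return a + b;           // portable fallback for other targets
#endif
}
```

For anything beyond a few instructions, separately assembled object files linked into the high-level build (as described above) are usually cleaner than inline asm, since the constraints syntax gets unwieldy fast.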

Faerie (May 31, 2011, 14:04:36 PM):
so, ummm.... How many programmers are on this forum??? :/
Mefiante (May 31, 2011, 14:09:17 PM):
Professionally, I don’t program anymore although I do write task-specific codes when there is no easy way to use existing tools. I used to write such scientific and engineering codes in the manner I described earlier, though.

Mandarb (May 31, 2011, 15:18:44 PM):
I am; at the Joburg SITP meetings it's like 80% of the people are in IT.
I'm not at the level that BM and Mefiante are, having only ever written seriously in high-level languages (mainly .NET). I can recognize Assembler and C++ when I see it, but might have some trouble reading it.

