Originally posted by zeeblebot
dee-dum-dum-dee-dee .... dum-dum-dee-dee-dum-dum-dee-DEE-dee-dee-dee-deeeeee .... dum-dum-dee-DEE-dee-dee-dee-DEE-dee-dee-dee-dee ... DUM-DUM-dee-DEE ....
http://thedailywtf.com/Articles/That-Wouldve-Been-an-Option-Too.aspx

You know, it's funny. At a company that shall remain unnamed, all of our optics lab results went through one computer with a 140 GB hard drive. The problem was that every lab fed that one machine, and we were pushing at least 40 GB a day into it. So every few days the IT guys had to go in and pull old data off so we could get our optics lab up and running again.
We have these weekly company meetings with the CEO and such. One day I brought up the point that all our lab data was being funneled through this one obsolete drive, and that if it ever took a dump, all of that data would be trashed.
I also made the point that we could just replace that piece-of-poop HD with a new 1 TB drive, which I think cost about 150 bucks at the time. The CEO handed that off to one of his managers. So I guess you can figure out whether we ever got the new drive......
Originally posted by zeeblebot
http://en.wikipedia.org/wiki/Wine_%28software%29

Yes, I looked that up after making the previous comment. It implies that not all games will run smoothly (and I have a lot of games). Nevertheless I will consider it, though it might be hard explaining to my son that he can't play his favorite game because Wine doesn't yet support it.
It might actually be the best route to DirectX 10, since I am still on XP because I can't afford Windows 7.
Hard to say for this application in particular, but there are often gains from de-bloating applications like this beyond just memory usage.
Performance, for example: it takes time to turn all those bits on and off. If you're not using twice the memory, you're not paying to touch twice the memory.
Maintainability, for another. If they actually were able to reduce the memory usage by 50%, then odds are the code was just a mess, and refactoring it will make it easier to maintain and hopefully improve their insta-patch dilemma.
It drives me nuts that people feel so free to write bloatware these days. And then they wonder why it takes three minutes to boot their computer, or a minute to open MS Word.
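A toy illustration of the memory-versus-speed point, as a rough Java sketch (the class name and sizes are made up, and the actual timings will vary with machine and JVM):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration: the same data stored compactly vs. with per-element overhead.
// Summing a primitive int[] touches far less memory than summing boxed Integers,
// so it is usually noticeably faster. A naive micro-benchmark, for illustration only.
public class BloatDemo {
    static final int N = 10_000_000;

    public static void main(String[] args) {
        int[] packed = new int[N];
        List<Integer> boxed = new ArrayList<>(N);
        for (int i = 0; i < N; i++) {
            packed[i] = i;
            boxed.add(i);          // each element becomes a separate Integer object
        }

        long t0 = System.nanoTime();
        long sum1 = 0;
        for (int v : packed) sum1 += v;
        long t1 = System.nanoTime();

        long sum2 = 0;
        for (int v : boxed) sum2 += v;   // unboxing plus pointer chasing
        long t2 = System.nanoTime();

        System.out.printf("packed: %d ms, boxed: %d ms (sums %d / %d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum1, sum2);
    }
}
```

The boxed list stores every element as a separate object, so the loop drags far more memory through the cache for the same amount of work.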
Originally posted by joneschr
It drives me nuts that people feel so free to write bloatware these days. And then they wonder why it takes three minutes to boot their computer, or a minute to open MS Word.

A programmer must find the correct balance. Enhancing the performance of an application takes time (and money and effort) and has both benefits and disadvantages. Quite often the best performance requires making the code more complicated - not necessarily more maintainable, as you suggest.
It's just not economically viable in most cases to squeeze every last bit of performance out of an application.
That even applies to MS Word. If Microsoft spent another billion dollars and two years optimizing Word, it would take half the memory and run twice as fast, but it would also cost you twice as much and wouldn't sell, because it would be two years behind the competition.
In my line of work, my applications are constantly changing and may not be used much, or may be discontinued. It is only worth optimizing when speed is really an issue. And even then, tweaking the Java memory settings may yield greater benefits than days of hand optimization.
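For what it's worth, "Java memory settings" here presumably means things like the standard heap-size flags. A tiny sketch for checking what limits a JVM process is actually running with (the flag values in the comment are just examples):

```java
// Prints the heap limits the JVM is running with. Launching the same program
// with e.g. "java -Xms512m -Xmx2g HeapInfo" (example values) changes what
// maxMemory() reports, which is often the cheapest "optimization" available.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap:   " + rt.maxMemory() / mb + " MB");
        System.out.println("total heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("free heap:  " + rt.freeMemory() / mb + " MB");
    }
}
```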
Originally posted by twhitehead
A programmer must find the correct balance. Enhancing the performance of an application takes time (and money and effort) and has both benefits and disadvantages. Quite often the best performance requires making the code more complicated - not necessarily more maintainable, as you suggest.

Absolutely. Hence the First Rule of Software Optimisation:
"Don't do it."
(The Second Rule is for use by experts, and says "Don't do it yet".)
I have always had a theory about large computing projects like SETI@home and other processor-intensive projects.
If the results are not urgent, simply wait a year and you will be able to get the work done in half the time.
Wait 20 years, and what would have taken a million computers a month can be done on one computer in a day.
With other projects, such as Rosetta@home, they are doing science immediately based on the feedback, and the sooner we cure cancer the better, so it might pay off not to wait. But I am not sure. If they saved up their money for 10 years and then applied the skills and computing power available at that point, they might achieve more overall, since bang for the buck potentially doubles every year.
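To put rough numbers on that, here is a back-of-the-envelope sketch in Java that simply assumes the compute you get from one machine doubles once a year, as above:

```java
// Back-of-the-envelope: if one machine's throughput doubles every `doublingYears`,
// how much faster is a single machine after `years`, and how many doublings would
// it take to match "1,000,000 machines for a month" with "1 machine for a day"?
public class DoublingMath {
    public static void main(String[] args) {
        double doublingYears = 1.0;            // the post's assumption
        for (int years : new int[] {1, 10, 20}) {
            double speedup = Math.pow(2, years / doublingYears);
            System.out.printf("after %2d years: one machine is ~%.0fx faster%n", years, speedup);
        }
        // 1e6 machines * 30 days vs 1 machine * 1 day  =>  factor of about 3e7
        double neededFactor = 1_000_000.0 * 30.0;
        double doublingsNeeded = Math.log(neededFactor) / Math.log(2);
        System.out.printf("1M machine-months -> 1 machine-day needs ~%.1f doublings%n",
                doublingsNeeded);
    }
}
```

Under that assumption, 20 years is about a factor of a million, while the million-computers-for-a-month versus one-computer-for-a-day comparison works out to roughly 25 doublings.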
Originally posted by twhitehead
Quite often the best performance requires making the code more complicated - not necessarily more maintainable, as you suggest.

I agree with most of what you say; there are certainly cases where additional complexity is needed to improve performance.
But I disagree that it's the common case. I personally find the more common case is that programs perform like dogs because they lack simplicity, not complexity.
Behold the lowly bubble sort, which outperforms fancier sorts on small data sets.
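For reference, the whole algorithm is only a few lines. A quick sketch in Java; in practice most code would just call a library routine such as java.util.Arrays.sort instead:

```java
import java.util.Arrays;

// A plain bubble sort: repeatedly swap adjacent out-of-order elements until
// no swaps are needed. Simple, and on tiny arrays the low overhead can beat
// fancier algorithms; on large arrays it degrades to O(n^2).
public class BubbleSortDemo {
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        while (swapped) {
            swapped = false;
            for (int i = 1; i < a.length; i++) {
                if (a[i - 1] > a[i]) {
                    int tmp = a[i - 1];
                    a[i - 1] = a[i];
                    a[i] = tmp;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);
        System.out.println(Arrays.toString(data));   // [1, 2, 4, 5, 8]
        // The one-line alternative most code would use instead:
        // Arrays.sort(data);
    }
}
```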
Or consider also the case of the Linux kernel, which Linus himself acknowledges suffers from performance problems due to bloat:
http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloated_huge/
Mtthw is exactly right. Don't do it yet - because when you bloat your program with premature attempts to optimize, you end up having the opposite effect.
Originally posted by joneschr
But I disagree that it's the common case. I personally find the more common case is that programs perform like dogs because they lack simplicity, not complexity.

In my experience, optimizing normally consists of the following steps:
1. Implement caching where possible - extra complexity (see the memoization sketch below).
2. Implement compression where possible - extra complexity.
3. Use various coding methods that reduce the number of instructions executed, such as moving methods inline or hand-tuning the source code - extra complexity.
4. Customize the code for the specific hardware - extra complexity.
5. Implement improved algorithms, such as better sorting methods - extra complexity.
6. Instead of using the code/HTML produced by the various tools, edit it directly to remove all the bloat the tools produce - extra complexity.
Remember that the extra complexity I am referring to may not always be in the actual code itself but in the maintainability of the code.
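Here is the caching sketch referred to in step 1: a minimal memoization wrapper in Java. The expensiveLookup method is a made-up stand-in for whatever call is actually slow, and a real cache would also need to worry about invalidation and memory growth, which is exactly the kind of extra complexity I mean.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal memoization: trade extra code and extra memory (the cache itself,
// plus the question of when entries go stale) for fewer expensive calls.
// "expensiveLookup" is a hypothetical stand-in for a slow computation or query.
public class CachedLookup {
    private final Map<String, String> cache = new HashMap<>();

    public String lookup(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            return cached;                 // cache hit: no expensive work
        }
        String value = expensiveLookup(key);
        cache.put(key, value);             // cache miss: remember for next time
        return value;
    }

    private String expensiveLookup(String key) {
        // pretend this hits a database or does heavy computation
        return key.toUpperCase();
    }
}
```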
Behold the lowly bubble sort, which outperforms fancier sorts on small data sets.
And that's simpler than what? I have never implemented bubble sort in preference to another method. If I ever did, it would probably be adding complexity, because I would be coding it myself instead of using a built-in utility method.
Or consider also the case of the Linux kernel, which Linus himself acknowledges suffers from performance problems due to bloat:
http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloated_huge/
That article doesn't indicate that simplifying the kernel would be the solution. It would be things like assembly-code optimization that would have the most impact. There may be code duplication that could be dealt with, but that is likely to affect size more than performance.