[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]

A study at the Software Engineering Laboratory - a cooperative project of NASA, Computer Sciences Corporation, and the University of Maryland - found that extensive computer use (edit, compile, link, test) is correlated with low productivity. Programmers who spent less time at their computers actually finished their projects faster. The implication is that heavy computer users didn't spend enough time on planning and design before coding and testing (Card, McGarry, and Page 1987; Card 1987).
[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]

I want to say only one sentence about software design.

"Software design starts from and ends with [ separating things that will be changed with high possibility, from others that will not. ]".

From a macro point of view, the software requirements are the main subject of this classification. From a micro point of view, function parameters, algorithms, data structures, and so on can be the targets.
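
To make this concrete, here is a minimal C sketch (the store module and its functions are hypothetical): the in-memory layout is judged likely to change, so it is hidden behind an opaque type, while the small function interface is the part expected to stay stable.

    /* store.h - the stable part: callers depend only on these declarations. */
    struct store;                       /* opaque: the layout is likely to change */

    struct store *store_create(void);
    void store_put(struct store *s, int key, int value);
    int  store_get(const struct store *s, int key);
    void store_destroy(struct store *s);

The implementation behind struct store can move from a plain array to a hash table without touching any caller, because the likely-to-change part was separated out in advance.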

[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]
 

Refactoring itself costs a lot. In particular, verifying the result - the refactored software - is extremely expensive. So, to minimize these costs, an automated test system is essential. Don't underestimate the costs: building an automated test system usually saves more than expected.
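
As a minimal sketch of what one such automated test can look like in C (clamp here is just a hypothetical stand-in for whatever unit is being refactored):

    #include <assert.h>
    #include <stdio.h>

    /* Function under test - a stand-in for any unit being refactored. */
    static int clamp(int v, int lo, int hi)
    {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int main(void)
    {
        assert(clamp( 5, 0, 10) ==  5);  /* inside the range       */
        assert(clamp(-1, 0, 10) ==  0);  /* clamped to lower bound */
        assert(clamp(99, 0, 10) == 10);  /* clamped to upper bound */
        puts("all tests passed");        /* rerun after every refactoring step */
        return 0;
    }

Rerunning such a suite after every refactoring step turns the expensive verification into a cheap, repeatable command.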

[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]
 

This is 100% personal opinion...
Stages of progress in programming skill (excluding collaboration skills).

1. Coding while paying attention to the language syntax.
2. Coding one's own thoughts quickly. (Lines of code per hour are high, but the result is very buggy with poor design.)
3. Realizing the difficulty of debugging, so coding with debugging in mind. Initial speed becomes slower than in (2).
4. Realizing the value of software design, so coding with design in mind too. (Slower than (3).)
5. Knowing design techniques and the importance of interface design, but still immaturely, so the software tends to be over-engineered. Lots of time is spent designing the over-engineered software, its interfaces, and so on. At this stage the programmer knows what should be considered for good software, but doesn't yet have ideas or experience about the solutions. That's why initial speed is very slow.
6. Becoming mature, and getting faster step by step.

[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]

[Extracted from Wikipedia]

Software development

Part of this section is from the Perl Design Patterns Book.

In the software development process, the top-down and bottom-up approaches play a key role.

Top-down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. In a top-down approach, stubs are attached in place of not-yet-written modules. This, however, delays testing of the ultimate functional units of a system until significant design is complete. Bottom-up emphasizes coding and early testing, which can begin as soon as the first module has been specified. This approach, however, runs the risk that modules may be coded without a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Reusability of code is one of the main benefits of the bottom-up approach.

Top-down design was promoted in the 1970s by IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top-down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Wirth went on to develop languages such as Modula and Oberon (where one can define a module before knowing the entire program specification), one can infer that top-down programming was not strictly what he promoted. Top-down methods were favored in software engineering until the late 1980s, and object-oriented programming helped demonstrate that both top-down and bottom-up aspects could be utilized.

Modern software design approaches usually combine both top-down and bottom-up approaches. Although an understanding of the complete system is usually considered necessary for good design, leading theoretically to a top-down approach, most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom-up flavour. Some design approaches also start from a partially functional system, which is designed and coded to completion and then expanded to fulfill all the requirements for the project.

Programming

Top-down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. Eventually, the components are specific enough to be coded and the program is written. This is the exact opposite of the bottom-up programming approach which is common in object-oriented languages such as C++ or Java.

The technique for writing a program using top-down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions, and the process is repeated. These compartmentalized sub-routines eventually perform actions so simple they can be easily and concisely coded. When all the various sub-routines have been coded, the program is done.

By defining how the application comes together at a high level, lower level work can be self-contained. By defining how the lower level abstractions are expected to integrate into higher level ones, interfaces become clearly defined.
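
A minimal C sketch of such skeleton code (the three function names are hypothetical placeholders, each to be refined later):

    #include <stdio.h>

    /* Stubs standing in for modules that are not designed yet. */
    static void read_input(void)   { puts("stub: read_input"); }
    static void process_data(void) { puts("stub: process_data"); }
    static void write_report(void) { puts("stub: write_report"); }

    int main(void)
    {
        /* The high-level flow is fixed first; each stub is replaced
         * with a real implementation as the design is refined downward. */
        read_input();
        process_data();
        write_report();
        return 0;
    }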

Top-down approach

Practicing top-down programming has several advantages:

* Separating the low-level work from the higher-level abstractions leads to a modular design.
* Modular design means development can be self-contained.
* Having "skeleton" code illustrates clearly how low-level modules integrate.
* Fewer operational errors (each module is developed and checked separately, so programmers can give every part sufficient attention).
* Less time consuming (each programmer is involved in only a part of the big project).
* Well-optimized processing (each programmer applies their own knowledge and experience to their own modules, so the project as a whole becomes optimized).
* Easy to maintain (if an error occurs in the output, it is easy to identify which module of the program generated it).

Bottom-up approach

[Figure: Lego Racer parts in Pro/ENGINEER WF4.0 (proetools.com) - a good example of bottom-up design, because the parts are created first and then assembled without regard to how they will work in the assembly.]

In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, whereby the beginnings are small, but eventually grow in complexity and completeness.
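
A minimal C sketch of this "seed" style (all names hypothetical): a small, self-contained base element is written and tested first, and a larger unit is composed from it afterwards.

    #include <stddef.h>
    #include <stdio.h>

    /* Base element: small and self-contained, written and tested first. */
    static double sum(const double *v, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += v[i];
        return s;
    }

    /* Larger subsystem, composed afterwards from the base element. */
    static double mean(const double *v, size_t n)
    {
        return (n > 0) ? sum(v, n) / (double)n : 0.0;
    }

    int main(void)
    {
        double v[] = { 1.0, 2.0, 3.0 };
        printf("%f\n", mean(v, 3));   /* the top level appears last */
        return 0;
    }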

Object-oriented programming (OOP) is a programming paradigm that uses "objects" to design applications and computer programs.

In mechanical engineering, with software programs such as Pro/ENGINEER, Solidworks, and Autodesk Inventor, users can design products as individual pieces rather than as parts of the whole, and later add those pieces together to form assemblies, like building with LEGO. Engineers call this piece-part design.

This bottom-up approach has one weakness: a lot of intuition is needed to decide what functionality each module should provide. If a system is to be built from an existing system, however, this approach is more suitable, as it starts from existing modules.

Pro/ENGINEER (as well as other commercial Computer Aided Design (CAD) programs) does, however, support top-down design through so-called skeletons. These are generic structures that hold information on the overall layout of the product. Parts can inherit interfaces and parameters from this generic structure. Like parts, skeletons can be put into a hierarchy, so it is possible to define the overall layout of a product before the parts are designed.

[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]

Classifying non-regular bugs. (My opinion)

I usually classify non-regular bugs into 3 categories.

(*1) : Caused by timing. In multi-threaded software, an unexpected execution order - e.g. a race condition - may raise an error.
(*2) : Caused by memory corruption. Memory is corrupted for some reason - that is the root cause - and because of it, errors occur irregularly.
(*3) : Caused by using uninitialized data.

Knowing which category a bug belongs to is very difficult, and usually depends on the developer's experience.
Here is one tip from my experience.

In case (*2) :
  the way to reproduce the bug itself seems very irregular, and the manifestation (result) of the crash is also very nondeterministic.
In case (*3) :
  even if the reproduction sequence is irregular too, the manifestation (result) tends to be similar across cases. Usually the software crashes on a 'wrong memory reference', and the overall outlook of the crash is very regular.
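
For illustration, here is a minimal C sketch of case (*3) (hypothetical code; the behavior is undefined, but in practice such a bug tends to crash with the same 'wrong memory reference' pattern every time):

    #include <string.h>

    struct buf {
        char  *data;   /* never initialized below */
        size_t len;
    };

    int main(void)
    {
        struct buf b;              /* b.data holds whatever garbage was on the stack */
        memset(b.data, 0, 16);     /* bug: dereferences an uninitialized pointer */
        return (int)b.len;         /* bug: also reads uninitialized data */
    }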

[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]
 

The following is all my personal opinion!

We always talk about "good code", and lots of constraints are suggested for so-called "good code". For example, "the number of lines in one function should be less than xx", "the number of class members should be less than xx", and so on.

Let's think from basics. What is "good code"? We don't need messy conditions for this - just one factor!
I want to define "good code" as "code that is easy to understand". Code that is easy to understand is also easy to modify.

Let's talk about this more.
"Code that is easy to understand" means "it should be easy to know what the code is for and to predict its execution result". Then what should we do to achieve this?

"Making code be small" can be an option. Anyway, small codes are easy. But, without reducing features and functionalities, decreasing code size enough is impossible. So, in practice, the is not a solution.

Let's rephrase it as "making the amount of stuff that must be understood small". How about this? We can use lots of reliable modules/components to achieve it. For example, think about the standard library. We don't try to understand the inside of each standard library function, because we already trust the library enough. So, after understanding the inputs and outputs, we just use it. If we had to understand the inside of all those library functions, it would be a nightmare - too many things to understand!

In this context we can talk about OOP (Object-Oriented Programming), component-based software engineering, modularity, and so on. Making reliable and reusable modules/components reduces the amount of stuff that must be understood: knowing the inputs and outputs is much simpler than understanding the code inside.
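
As a small C illustration of "knowing inputs and outputs is enough": to use qsort from the standard library, we only need its contract (array, element count, element size, comparator); nobody has to read its internal implementation.

    #include <stdio.h>
    #include <stdlib.h>

    /* Comparator: the only logic we actually have to understand. */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a;
        int y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int v[] = { 3, 1, 2 };
        /* We trust qsort's input/output contract, not its internals. */
        qsort(v, 3, sizeof v[0], cmp_int);
        printf("%d %d %d\n", v[0], v[1], v[2]);   /* prints: 1 2 3 */
        return 0;
    }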

This is my definition of "good code".

[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]

* Space vs. Speed
- The most famous one (memory space vs. execution speed) - details are omitted.

* Modularity vs. (Performance & Space)
- A well-designed module should be stable. To be stable, it may contain lots of parameter checks, condition checks, sanity checks, and so on. Besides, for safety reasons, it may not trust the memory passed to it, so it may allocate its own memory and copy data from the passed memory before using it. These make the trade-off: a well-designed module hides its information and is stable as it is, but to be like this it spends speed and space (see the sketch after this list).

* Abstraction vs. Debugging
- When the software is stable, abstraction reduces maintenance costs very much. But when it is not, debugging is more difficult than with straightforwardly designed software, especially for engineers who are not accustomed to the software.

* Low possibility of misuse vs. rich and various functionality
- This one is clear...

* Backward-Compatibility vs. Innovation
- ...
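
Here is the sketch promised in the modularity item above - a minimal C example with a hypothetical widget module that defends itself with checks and keeps its own copy of the caller's data, paying in speed and space for its stability:

    #include <stdlib.h>
    #include <string.h>

    struct widget { char *name; };   /* name is NULL or heap-allocated */

    int widget_set_name(struct widget *w, const char *name)
    {
        if (w == NULL || name == NULL)   /* sanity checks: cost speed */
            return -1;

        size_t len = strlen(name) + 1;
        char *copy = malloc(len);        /* private copy: costs space */
        if (copy == NULL)
            return -1;
        memcpy(copy, name, len);

        free(w->name);                   /* the module owns its data and    */
        w->name = copy;                  /* never trusts the caller's buffer */
        return 0;
    }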

[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]
 

Output files should be separated from input files. Usually a 'clean' option is supported to delete auto-generated output files, but separating outputs at the directory level is better - for example, an 'out' directory as the root directory for all outputs.
This is very basic and fundamental, but it is also easily ignored.

Supporting a 'clean' option is good. But cleaning output files that are scattered over several different places is difficult to maintain, and in that case finding the necessary output files is also difficult.
In an environment where output files are separated at the directory level, a programmer can easily tell which files are input and which are output. This means he/she can know which files are necessary to build the software without further effort - easy-to-read software.
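
A minimal Makefile sketch of the idea (names hypothetical): every generated file goes under the out directory, so 'clean' is just removing that one directory.

    CC  := gcc
    OUT := out

    $(OUT)/app: main.c | $(OUT)
    	$(CC) -o $@ $<

    $(OUT):
    	mkdir -p $(OUT)

    clean:
    	rm -rf $(OUT)    # all outputs live under out/, nothing is scattered

    .PHONY: clean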

[[ The exact posting date was lost while migrating the blog. The year and quarter are probably about right. ]]

Sometimes a so-called 'context' data structure is used to handle a set of related information, and this context tends to be required as a parameter by various functions. Here are the pros and cons.

Pros:
- It is easy to handle lots of related data - allocating and freeing it, etc.
- We can save the cost of parameter passing; passing the context is enough.

Cons:
- It is difficult to know which data is input and which is output, because the whole context is passed as a parameter. This decreases readability very much.
- Usually the context is passed as a parameter to lots of functions. So, just like 'global' data, it is very difficult to trace its value-change history. That is, debugging/maintaining becomes difficult.

One good way of using a context structure: only a very few modules may change/modify the context data, and all the others may only read it. Then lots of the issues in "Cons" can be overcome, as in the sketch below.
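
A minimal C sketch of that rule (names hypothetical): the owner module takes the context as mutable, and everyone else receives it as const, so read-only access is enforced by the compiler.

    struct ctx {
        int  user_id;
        char locale[8];
    };

    /* Owner module: the only place allowed to modify the context. */
    void ctx_set_user(struct ctx *c, int id)
    {
        c->user_id = id;
    }

    /* All other modules: read-only access, enforced by 'const'. */
    int needs_translation(const struct ctx *c)
    {
        /* c->user_id = 0; would be a compile error here */
        return c->locale[0] != 'e';   /* toy check: non-English locale */
    }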
