Always strive to compile code at the highest warning level available for your compiler, and eliminate warnings by modifying the code. This helps you avoid hours of debugging and frustration, at the end of which you discover that your only mistake was a function returning no value on some code path, or a forgotten default case in a switch statement!
In this article, I will present a list of useful options for the g++ compiler. I have borrowed the detailed descriptions of these options from the gcc/g++ online manual. In case of any doubt, please consult the documentation on your system specific to your compiler version.
-D_GLIBCXX_DEBUG -g -Wall -Wextra -pedantic -Weffc++ -Wold-style-cast -Woverloaded-virtual -Wswitch-default -Wswitch-enum -Wmissing-noreturn -Wunreachable-code -Winline
Description:
-D_GLIBCXX_DEBUG :
This is the libstdc++ debug mode which replaces unsafe (but efficient) standard containers and iterators with semantically equivalent safe standard containers and iterators to aid in debugging user programs.
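As a sketch (the file and the helper name read_element are mine, not from the manual), consider an out-of-range vector access: in a normal build it is silent undefined behavior, while a build with -D_GLIBCXX_DEBUG aborts at the bad subscript with a diagnostic.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: operator[] performs no bounds checking in a normal
// build, so an out-of-range index silently reads garbage. Compile with
//   g++ -D_GLIBCXX_DEBUG demo.cpp
// and the same out-of-range call aborts with a debug-mode diagnostic.
int read_element(const std::vector<int>& v, std::size_t i) {
    return v[i];
}
```

Calling read_element(v, 99) on a three-element vector is the case the debug mode catches; in-range calls behave identically in both builds.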
-g :
Produce debugging information in the operating system's native format (stabs, COFF, XCOFF, or DWARF). GDB can work with this debugging information.
On most systems that use stabs format, -g enables use of extra debugging information that only GDB can use; this extra information makes debugging work better in GDB but will probably make other debuggers crash or refuse to read the program. If you want to control for certain whether to generate the extra information, use -gstabs+, -gstabs, -gxcoff+, -gxcoff, or -gvms (see below).
Unlike most other C compilers, GCC allows you to use -g with -O. The shortcuts taken by optimized code may occasionally produce surprising results: some variables you declared may not exist at all; flow of control may briefly move where you did not expect it; some statements may not be executed because they compute constant results or their values were already at hand; some statements may execute in different places because they were moved out of loops.
Nevertheless it proves possible to debug optimized output. This makes it reasonable to use the optimizer for programs that might have bugs.
-Wall : Enable a broad set of commonly useful warnings (not literally all of them). Consult the manual for more details.
-Wextra : Enable some extra warnings in addition to -Wall.
-pedantic : Issue all the warnings demanded by strict ISO C and ISO C++.
-Weffc++ :
Warn about violations of the following style guidelines from Scott Meyers' Effective C++ book:
* Item 11: Define a copy constructor and an assignment operator for classes with dynamically allocated memory.
* Item 12: Prefer initialization to assignment in constructors.
* Item 14: Make destructors virtual in base classes.
* Item 15: Have operator= return a reference to *this.
* Item 23: Don't try to return a reference when you must return an object.
Also warn about violations of the following style guidelines from Scott Meyers' More Effective C++ book:
* Item 6: Distinguish between prefix and postfix forms of increment and decrement operators.
* Item 7: Never overload &&, ||, or ,.
When selecting this option, be aware that the standard library headers do not obey all of these guidelines; use grep -v to filter out those warnings.
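A minimal sketch (the class name Buffer is illustrative, not from the book) that trips two of the guidelines above when compiled with -Weffc++:

```cpp
#include <cstddef>

// Illustrative class that -Weffc++ would flag:
// * Item 12: members are assigned in the constructor body instead of
//   being initialized in the member-initializer list.
// * Item 11: the class owns dynamically allocated memory but declares
//   neither a copy constructor nor an assignment operator.
class Buffer {
public:
    explicit Buffer(std::size_t n) {
        size_ = n;            // prefer : size_(n), data_(new char[n])
        data_ = new char[n];
    }
    ~Buffer() { delete[] data_; }
    std::size_t size() const { return size_; }
private:
    char*       data_;
    std::size_t size_;
};
```

The code compiles and runs as-is; the warnings point out that copying a Buffer would double-delete data_.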
-Wold-style-cast :
Warn if an old-style (C-style) cast to a non-void type is used within a C++ program. The new-style casts (dynamic_cast, static_cast, reinterpret_cast, and const_cast) are less vulnerable to unintended effects and much easier to search for.
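A tiny sketch of the two styles side by side (the function names are mine):

```cpp
// half_old uses the C-style cast that -Wold-style-cast reports;
// half_new states the intended conversion explicitly with static_cast,
// which is also trivially greppable.
double half_old(int n) { return (double)n / 2; }
double half_new(int n) { return static_cast<double>(n) / 2; }
```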
-Woverloaded-virtual: Warn when a function declaration hides virtual functions from a base class.
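A sketch of the hiding this warning catches (Base/Derived are illustrative names):

```cpp
struct Base {
    virtual ~Base() {}
    virtual int f(int x) { return x; }
};

// Derived::f(double) does not override Base::f(int); it hides it, so
// d.f(3) converts 3 to double and calls the derived overload instead of
// the base one. -Woverloaded-virtual reports exactly this situation.
struct Derived : Base {
    int f(double x) { return static_cast<int>(x) * 2; }
};
```

Calls through a Base reference still dispatch to Base::f(int), since nothing overrides it, which makes the behavior depend on the static type of the expression.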
-Wswitch-default : Warn whenever a switch statement does not have a default case.
-Wswitch-enum : Warn whenever a switch statement has an index of enumerated type and lacks a case for one or more of the named codes of that enumeration. Case labels outside the enumeration range also provoke warnings when this option is used.
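A sketch of what both switch-related options catch (the Color enum and color_name are made up):

```cpp
#include <cstring>

enum Color { Red, Green, Blue };

// The switch covers only two of the three enumerators, which
// -Wswitch-enum reports; -Wswitch-default would additionally warn
// that no default label is present.
const char* color_name(Color c) {
    switch (c) {
    case Red:   return "red";
    case Green: return "green";
    }
    return "unknown";  // Blue ends up here
}
```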
-Wmissing-noreturn :
Warn about functions which might be candidates for attribute noreturn. Note these are only possible candidates, not absolute ones. Care should be taken to manually verify functions actually do not ever return before adding the noreturn attribute, otherwise subtle code generation bugs could be introduced. You will not get a warning for main in hosted C environments.
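A sketch of a genuine candidate (fail/checked_div are illustrative names; the GCC attribute syntax is shown, C++11 also offers [[noreturn]]):

```cpp
#include <cstdio>
#include <cstdlib>

// fail() exits on every path, so -Wmissing-noreturn suggests annotating
// it. With the attribute, the compiler knows the call never returns and
// can reason correctly about the code that follows it.
__attribute__((noreturn)) void fail(const char* msg) {
    std::fprintf(stderr, "fatal: %s\n", msg);
    std::exit(1);
}

int checked_div(int a, int b) {
    if (b == 0) fail("division by zero");
    return a / b;  // reachable only when b != 0
}
```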
-Wunreachable-code :
Warn if the compiler detects that code will never be executed.
This option is intended to warn when the compiler detects that at least a whole line of source code will never be executed, because some condition is never satisfied or because it is after a procedure that never returns.
It is possible for this option to produce a warning even though there are circumstances under which part of the affected line can be executed, so care should be taken when removing apparently-unreachable code.
For instance, when a function is inlined, a warning may mean that the line is unreachable in only one inlined copy of the function.
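A minimal sketch of the straightforward case (clamp_positive is a made-up name):

```cpp
// Every path returns before the final assignment, so the last statement
// can never execute; -Wunreachable-code points at it.
int clamp_positive(int x) {
    if (x < 0) return 0;
    return x;
    x = -1;  // never executed
}
```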
-Winline : Warn if a function that was declared inline cannot be inlined. Even with this option, the compiler will not warn about failures to inline functions declared in system headers.
If you have any questions/suggestions do let me know by leaving a comment. Thanks. Enjoy your time with g++!
"It is not the Aptitude but Attitude that determines your Altitude!"
Wednesday, November 10, 2010
CScope and CTags
CSCOPE:
Cscope can be a particularly useful tool if you need to wade into a large code base. You can save yourself a lot of time by being able to do fast, targeted searches rather than randomly grepping through the source files by hand (especially since grep starts to take a while with a truly large code base).
Steps:
1. Get the source code.
2. Figure out where you want to put your Cscope database files.
3. Generate cscope.files with a list of files to be scanned.
find . -name '*.h' > cscope.files
find . -name '*.cpp' >> cscope.files
4. Generate the Cscope database.
cscope -b -q
The -b flag tells Cscope to just build the database, and not launch the Cscope GUI. The -q causes an additional, 'inverted index' file to be created, which makes searches run much faster for large databases.
5. Using the database
Append following line to your ~/.bashrc:
alias csd='cscope -d'
This tells Cscope not to regenerate the database. Otherwise you'll have to wait while Cscope checks for modified files, which can take a while for large projects, even when no files have changed. If you accidentally run 'cscope', without any flags, you will also cause the database to be recreated from scratch without the fast index or kernel modes being used, so you'll probably need to rerun your original cscope command above to correctly recreate the database.
Now, use command: csd
6. Regenerating the database when the source code changes.
If there are new files in your project, rerun your 'find' command to update cscope.files if you're using it.
Then simply invoke cscope the same way (and in the same directory) as you did to generate the database initially (i.e. cscope -b -q)
CTAGS:
Here are a few useful tips. Refer to the manual page for more details using "man 1 ctags".
1. Build tags database:
ctags -R *
This creates a tags file.
To create a tags file for Perl only:
ctags --languages=Perl -R
2. Add it to your editor (in ~/.vimrc file in my case):
set tags={Path to your view (since you might want to have separate tags file for each view)}/tags
3. Other options:
--exclude=[pattern]
Add pattern to a list of excluded files and directories. This is used to avoid creating tags for the specified files, or for files under the specified directories.
Friday, October 22, 2010
Goal Setting
There is a difference between a dream and a goal.
Benefits of setting an achievable goal:
1. Guiding decisions
2. Monitor progress
3. Communicating growth
Goals fall into two areas: professional and personal.
Types of Goals
- Performance – Raise your aim and take advantage of current abilities
- Development – Expand abilities
Setting Goals
Objective component – e.g. conduct interviews, submit a plan, write a report
Standards component – measures whether an objective has been met, e.g. within 6 weeks, by 30%, fewer than 5 times
Conditions component – clarifies the objective; limitations on how to achieve the goal
Strategic Thinking
1. Win collaboration
2. Assess the risk
3. Reduce wasted effort – Productivity
Assessing Risk
Risk -> Time/Effort
- Classify goals in low risk and high risk
Type of change associated with high risk goals
- Create a new condition
- Eliminate an existing condition
Type of change associated with low risk goals
- Preserve an existing condition
- Avoid an unwanted condition
Beware of unstated goals
Collaborating on Goals
- Define the conflict
- Propose a collaboration
- Define roles for participants
Prioritizing Goals
Advantages:
- You achieve your goals more quickly
- You take action on important and urgent goals first
- You are better able to recognize when it's time to let go of a goal or choose an alternative goal.
Criteria for prioritizing:
- Personal importance
- Professional importance
- Resource availability
- Resource urgency
Plot a graph of importance vs availability
Setting Alternative Goals
Strategies:
- Breaking out smaller objectives
- Reassessing priorities
- Seeking a different path to the destination
Don'ts:
- Relaxing standards
- Extending deadlines
Thursday, October 21, 2010
Static Code Analysis
Static code analysis is the process of examining and evaluating software without actually executing the code. Analyzing software while it is executing is known as dynamic analysis. Static code analysis is all about moving the detection of critical security and quality problems upstream, ensuring they're identified and fixed early in the development process.
This approach yields significant productivity gains across the entire process and leads to cleaner, more stable builds, more efficient testing, and of course, a higher quality product. Besides helping us find bugs that we’ve missed in unit testing, static code analysis has made all our engineers aware of security issues and helped us teach junior staff better coding techniques.
What’s Involved?
Static source code analysis tools are almost entirely automated. They’re like compilers, but instead of generating machine-executable code, they simply find bugs and issue warnings about security vulnerabilities, logic errors, implementation defects, concurrency violations, boundary conditions, and other glitches in the code. The tools provide a list of problems, each tied to a specific location in the source code. Detailed context is usually provided to explain how the tool arrived at the conclusion.
Static analysis tools use very sophisticated process flow and data flow analysis. The quality and security issues they identify are often complex and involve obscure logic problems, which is why these tools can be so valuable.
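As an illustration of the kind of defect path-sensitive data-flow analysis finds (find_entry and entry_length are made-up names), consider code that compiles cleanly yet dereferences null on one path:

```cpp
#include <cstring>

// Hypothetical lookup: returns a value for "known" names and nullptr
// for everything else.
const char* find_entry(const char* name) {
    return std::strcmp(name, "known") == 0 ? "known-value" : nullptr;
}

// The compiler accepts this without complaint, but on the lookup-miss
// path entry is null and strlen(entry) dereferences it. A static
// analyzer tracks the nullable value from find_entry to the strlen call
// and flags the unguarded path.
std::size_t entry_length(const char* name) {
    const char* entry = find_entry(name);
    return std::strlen(entry);  // null dereference when the lookup misses
}
```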
Static source code analysis tools analyze 100% of the source code, far more than any external test tools. For organizations that must comply with the Payment Card Industry Data Security Standard (PCI DSS) or the Payment Application Data Security Standard (PA-DSS), these tools fulfill the code review requirement. They also produce valuable metrics, including kilo-lines of code (KLoCs), file counts, and "churn", that is, the number of files that have changed between two regular builds.
Introducing static code analysis and the requisite tools into the development process isn’t always painless, however. For instance, static code analysis tools usually require careful integration into the project build process. For large software products, these builds are often somewhat of a black art, involving the use of Make and Ant. There are many options and dependencies. All static code analysis tools offer powerful utilities to analyze the build process and insert themselves into the right places, but some manual tuning is usually required.
These tools also must be integrated into developers’ daily work. Again, tool makers offer both command-line versions of the tools as well as plugins for many of the popular integrated development environments such as Eclipse and Visual Studio.
The tools require that the code base have a subject matter expert (SME) who can also provide the same service for the tools. That person will answer questions not just about how the tool operates but also about the issues that the tool is finding — including identifying when the tool is generating a false positive. The SME will provide training and support to other developers, a fairly heavy workload for the first few weeks, until everyone is familiar with the static analysis tool. After that, that part of the workload should settle down to several hours a week.
Initial Analysis
The biggest challenges with static code analysis tools are problems in existing code. There’s an old programmer’s joke that says “God made the world in six days because he had no installed base.” This is certainly not the case for most businesses, which often have millions of lines of code.
The first time an existing codebase is analyzed, tens of thousands of issues will be found. Don't panic. Remember, these issues have been there for a while, and the software continues to function and provide users with what they need. At ACI Worldwide, all the issues from an initial build on existing code are immediately deferred and hidden from sight. That way developers don't get overwhelmed and can stay focused on ensuring that new problems aren't introduced into the code. At some point in the future, product planners and the senior development staff review the deferred issues, prioritize and group them, and decide when remediation can be factored into the planning for a future release. There's no perfect approach, and businesses must always make hard decisions about whether to counter a vulnerability or assume the risk.
Tips For Success
• Define an initial issue policy. You may decide to only deal with the most severe issues for the first project cycle.
• Get the global mechanics working. Many of the tools require license managers and centralized result servers.
• Attack one product at a time. Get it working with one group and then move on to the next.
• Identify SMEs. Every product needs at least one subject matter expert. Large products that are broken into major components will naturally need an SME for each one.
• Train SMEs. Make them designated experts.
• Work with SMEs. Help them to do build and tool integration for their product or component.
• Train developers. The SME should guide how the tool is integrated into the team’s development process.
• Perform initial analysis on existing code and defer all issues. Don’t discuss the large quantity of issues with the developers. If any ask, explain to them that they’ve been set aside and will be considered in a future product cycle.
• Deliver help from SMEs to developers as required. During the first days of the roll-out, the SME should monitor the developers’ work. Developers should be analyzing the code often, at least before they submit a completed unit of work into the product build. Just as a developer wouldn’t check in a unit of code that doesn’t compile, they won’t want to check in a unit that still has static code analysis issues.
• Run the build analysis often. If the developers are doing their job and addressing issues as they come up, then no issues should be found at this stage.
• Review deferred issues. After the process is running smoothly and the tool is a routine part of work, review deferred issues and plan whatever remediation is needed for future releases.
The Right Tool For You
There are numerous open source and commercially available static code analysis tools on the market. When choosing one, the place to start is with language support. Some tools support a single language. Other static code analysis tools support multiple languages.
Final Analysis
Overall, static code analysis has proven to be a valuable tool. For a reasonable cost per developer, we can find serious bugs more comprehensively and earlier in the development process.
The tools include extensive help files that refer developers having difficulty with an issue to a more experienced developer to get advice — always a valuable interaction.
Bottom line: Static code analysis tools help incorporate security and quality awareness into the fabric of the entire development organization. Finding bugs earlier and avoiding security breaches is invaluable to any software development effort.
5 Queries for Choosing the Right Code Analysis Tool
1. Do you need a static or dynamic analysis tool?
2. What languages and platforms does it support?
3. How flexible is the reporting component?
4. How easy is it to add or update rules?
5. Does it integrate with your IDE?
Time Management
Areas:
A. Environment
B. Technology
C. Time stealers
A. Controlling Environment:
1. Paperwork
2. Physical organization
3. Meeting
A Technique for Managing Paperwork
Pass on – to be read by someone else; pass it on to only one person and avoid passing on multiple copies.
Read – read short documents immediately, long documents later
File – needed in future
Throw away – irrelevant doc
Physical organization
Comfort,
Structure,
Tidiness
Preparing to save time
- Ask the right questions – necessity, contribution, action
B. Time and technology:
Benefits:
- Communicate info very quickly over any distance.
- Enables you to store and retrieve info extremely easily
Controlling emails
Strategy (In order):
- Allocate specific time for addressing emails.
- Minimize the number of emails to be read
- Prioritize actions as a result of email.
- Minimize the time that each necessary reply requires.
- Deactivate desktop alerts
Electronic organization systems
- PC-based system – large amount of data that doesn't need to be shared
- Telephone-based – small amount of data which is very portable
- Networked – very large amount of data, shared access
C. Time stealers
Dealing with demands
- Inner directed – minimizes time given to other person
- Other directed – Gives much time to other person
- Autonomous – focuses on own goals and that of other person simultaneously.
Avoiding reverse delegation:
- Set boundaries
- Offer information
- Refuse extra work.
Beating Procrastination
- Results in fatigue and wasted time
Excuses:
- I lack information – get the relevant information from the people concerned as soon as possible
- I have plenty of time – identify the exact amount of time required for the task and schedule each action
- I don't have any time – reprioritize
Underlying reasons:
- anxious
- low motivation
How to beat?
- confront excuses
- break the habit
- identify the outcome
- take the first step
- learn from the past
Handling interruptions:
1. Neither welcome nor refuse an interruption
2. Aim to use your time as well as possible
3. Control what happens when you are interrupted.
- Allocate time – be specific about the amount of time. Say: "I have X minutes"
- Control content
- Control end – Say: “Unfortunately, I have another commitment now”
- Learn to say NO
Tuesday, October 05, 2010
GDB Essential commands
Command | Abbr | Description
set args | | Set command arguments. Also possible: gdb --args command arg1 ...
break | b | Set breakpoint (at function, line number, ...)
run | r | (Re)start execution
continue | c | Continue execution
step | s | Execute the next line, stepping into function calls
next | n | Execute the next line without stepping into functions
finish | fin | Run until the current function returns
list | l | Show source (for line, function, offset, ...)
backtrace | bt | Show the stack of functions. Add "full" to include local variables
up, down, frame | up, down, f | Move between stack frames
watch | wa | Break when a variable changes value
display | disp | Display an expression each time the program stops
info locals | i loc | Display local variables
info threads | i thr | Display all threads
thread | thr | Switch to thread #
info breakpoints | i b | Display all breakpoints
delete, enable, disable | d, en, dis | Delete, enable, disable a breakpoint
help | h | Display online help
focus next | fs n | Switch window (allows cursor keys in the CMD window, for example)
Ctrl-x a | | Toggle display of code in another window
Ctrl-L | | Redraw the display (e.g. after program output)
print | p | Print the value of an expression
set variable | set v | Evaluate expression EXP and assign the result to variable VAR
x/FMT | x | Examine memory
Sample .gdbinit file:
# Set verbose printing of informational messages.
set verbose on
# Set printing of addresses
set print address on
# Set printing of object's derived type based on vtable info
set print object on
set print sym on
# Set prettyprinting of structures
#set print pretty off
# Set printing of C++ static members
set print static-members on
# Set demangling of encoded C++/ObjC names when displaying symbols
set print demangle on
# Unset printing of 8-bit characters in strings as \nnn
set print sevenbit-strings off
# Set prettyprinting of arrays
set print array on
# Set printing of array indexes
set print array-indexes on
# Set printing of char arrays to stop at first null char
set print null-stop on
# Set printing of unions interior to structures
set print union on
# Set printing of C++ virtual function tables
set print vtbl on
# Set saving of the history record on exit
set history save on
# Set history expansion on command input
set history expansion on
# Set gdb's prompt
set prompt (onkar)
handle SIGCONT nostop
#### OTHER OPTIONAL SETTINGS ####
# Set a limit on how many elements of an array GDB will print. If GDB is printing a large array, it stops printing after it has printed the number of elements
# set by the set print elements command. This limit also applies to the display of strings. When GDB starts, this limit is set to 200. Setting number-of-elements
# to zero means that the printing is unlimited.
#set print elements number-of-elements
#source ~/stl-views-1.0.3.gdb
#set history filename # TODO: enable this if reqd. Set the filename in which to record the command history
#catch throw
Useful commands:
Conditional breakpoint:
break main.cc:100 if i == 10
Repetitive commands:
b main()
(gdb) commands 1
Type commands for when breakpoint 1 is hit, one per line.
End with a line saying just "end".
>print i
>print j
>print k
>end
The directory command and setting source directory:
(gdb) directory ~/src/somepackage/src
Source directories searched: /home/onkar/src/somepackage/src:$cdir:$cwd
This asks gdb to search for source files in the given directory in addition to the existing directories.
Tuesday, September 28, 2010
GNU Autotools - Autoconf, Automake tutorial
The GNU build system, also known as the Autotools, is a suite of programming tools designed to assist in making source-code packages portable to many Unix-like systems.
Here is a sample step by step example to build a standard GNU project from scratch:
1. README contains some very limited documentation for our little package.
[onkar@gnutools queue]$ cat README
This is a demonstration package for GNU Automake.
Type `info Automake' to read the Automake manual.
2. Makefile.am and src/Makefile.am contain Automake instructions for these two directories.
[onkar@gnutools queue]$ cat src/Makefile.am
bin_PROGRAMS = queue
queue_SOURCES = main.cpp Queue.cpp
noinst_HEADERS = Queue.h
[onkar@gnutools queue]$ cat Makefile.am
SUBDIRS = src
docdir = ${datadir}/doc/${PACKAGE}
dist_doc_DATA = README
3. configure.ac contains Autoconf instructions to create the configure script.
[onkar@gnutools queue]$ cat configure.ac
AC_INIT([queue], [1.0], [bug-automake@gnu.org])
AC_PROG_CXX
AM_INIT_AUTOMAKE([-Wall -Werror foreign])
AC_PROG_CC
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES([
Makefile
src/Makefile
])
AC_OUTPUT
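The C++ sources named in src/Makefile.am are not shown in this tutorial; a minimal sketch of what the Queue class behind Queue.h/Queue.cpp might look like (purely illustrative, the real package's class may differ) is:

```cpp
#include <deque>

// Illustrative sketch of the Queue class declared in Queue.h:
// a simple FIFO wrapper around std::deque.
class Queue {
public:
    void push(int v) { items_.push_back(v); }
    int pop() { int v = items_.front(); items_.pop_front(); return v; }
    bool empty() const { return items_.empty(); }
private:
    std::deque<int> items_;
};
```

main.cpp would then just push a few values and pop them back out to exercise the class.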
4. autoreconf: Once you have these five files, it is time to run the Autotools to instantiate the build system. Do this using the autoreconf command as follows:
[onkar@gnutools queue]$ autoreconf --install
5. configure: You can see that autoreconf created four other files: configure, config.h.in, Makefile.in, and src/Makefile.in. The latter three files are templates that will be adapted to the system by configure under the names config.h, Makefile, and src/Makefile. Let's do this:
[onkar@gnutools queue]$ ./configure
checking for g++... g++
checking for C++ compiler default output file name... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for style of include used by make... GNU
checking dependency style of g++... gcc3
checking for gcc... gcc
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
checking dependency style of gcc... gcc3
configure: creating ./config.status
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating config.h
config.status: config.h is unchanged
config.status: executing depfiles commands
6. make, make clean, make distcheck: You can see Makefile, src/Makefile, and config.h being created at the end after configure has probed the system. It is now possible to run all the targets we wish. For instance:
[onkar@gnutools queue]$ make
make all-recursive
make[1]: Entering directory `/home/onkar/practice/cc/queue'
Making all in src
make[2]: Entering directory `/home/onkar/practice/cc/queue/src'
if g++ -DHAVE_CONFIG_H -I. -I. -I.. -g -O2 -MT main.o -MD -MP -MF ".deps/main.Tpo" -c -o main.o main.cpp; \
then mv -f ".deps/main.Tpo" ".deps/main.Po"; else rm -f ".deps/main.Tpo"; exit 1; fi
if g++ -DHAVE_CONFIG_H -I. -I. -I.. -g -O2 -MT Queue.o -MD -MP -MF ".deps/Queue.Tpo" -c -o Queue.o Queue.cpp; \
then mv -f ".deps/Queue.Tpo" ".deps/Queue.Po"; else rm -f ".deps/Queue.Tpo"; exit 1; fi
g++ -g -O2 -o queue main.o Queue.o
make[2]: Leaving directory `/home/onkar/practice/cc/queue/src'
make[2]: Entering directory `/home/onkar/practice/cc/queue'
make[2]: Leaving directory `/home/onkar/practice/cc/queue'
make[1]: Leaving directory `/home/onkar/practice/cc/queue'
[onkar@gnutools queue]$ make distcheck
..................
===========================================
queue-1.0 archives ready for distribution:
queue-1.0.tar.gz
===========================================
[onkar@gnutools queue]$ make clean
Making clean in src
make[1]: Entering directory `/home/onkar/practice/cc/queue/src'
test -z "queue" || rm -f queue
rm -f *.o
make[1]: Leaving directory `/home/onkar/practice/cc/queue/src'
Making clean in .
make[1]: Entering directory `/home/onkar/practice/cc/queue'
make[1]: Nothing to be done for `clean-am'.
make[1]: Leaving directory `/home/onkar/practice/cc/queue'
Done!
Monday, May 31, 2010
Firefox: Essential Add-ons
WOT – We all know the threats surfers face, such as viruses, spyware, adware, malicious spam, and phishing, and this is where Web of Trust comes to the rescue. WOT warns you about risky websites that try to scam surfers before you enter them, using safety ratings for 21 million websites that combine evidence collected from multiple sources. Not only does it help surfers, it also protects children by blocking inappropriate content.
Ghostery – It keeps an eye on the websites that are keeping an eye on you, i.e. it finds out which websites are tracking you and alerts you about them.
Interclue - Ever wanted to know what was behind the link before you clicked? Interclue tells you everything you need to know before you open yet another tab.
Colorful Tabs – Colors every tab in a different color and makes them easy to distinguish while beautifying the overall appeal of the interface.
Separe – Helps you keep tabs tidy by introducing a new kind of tab.
Permatabs – Turns tabs of your choice into permanent tabs that can’t be closed and stick around between sessions.
Flashblock – Blocks all Flash content from loading on a webpage.
Adblock Plus – An enhanced version of Adblock. Blocks ads, applets, Flash, embedded media, etc.
Download Youtube Videos+ – A video and audio download toolbar for tube sites and FLV movies.
Google Toolbar for Firefox
Fastest Search: Text search on the current page or in all tabs. Includes features such as match counts, regex, visualized and listed results, and find-as-you-type.
Friday, May 28, 2010
Basics of Delegation
1. Some Benefits:
- Change the management philosophy.
- Enhance management style
- Enhance your productivity
- Reduce your workload
- Increased availability
2. How does delegation impact you?
- Delegation is vital for effective management
- Reduce workload
Delegate non-critical work like:
Researching
Data entry
Clerical duties
Organizing docs
- Alleviate time constraints
3. How does delegation impact your employees?
1. Increase motivation
2. Challenge your employees
3. Results in positive career development.
4. How does delegation impact your organization?
- Positive work atmosphere
- Efficient employees, high productivity, low turnover rates
- Retain skilled employees
5. Preparing you to delegate:
Benefits:
1. More confidence in your delegating ability
2. Quicker task completion
3. More time to focus on managerial tasks
6. Why do managers hesitate to delegate?
- Managers want to be viewed as superior workers
- They may overextend themselves and get exhausted
- Employees may resent it
- They may become inferior workers
- Perfectionist tendencies
- If you want something done right, do it yourself
Solutions:
1. Prioritize tasks and realize that perfection is not always attainable or necessary.
2. Redirect energy by turning mistakes/errors into positive learning experiences
3. Hone delegation skills.
- Fear of being replaced
7. Delegating style:
- Controlling => limited experience, substantial managerial input needed => limits responsibility, decreased stress and motivation levels
- Coaching => close supervision, moderately experienced employee => more responsibility
- Consulting => previous experience with similar tasks, manager is available for any questions, etc.
- Coordinating => Full responsibility to assignee, minimal feedback => results in a highly motivated workforce
8. Develop your delegating attitude:
- Confidence in delegating abilities
- willingness to take risks
- trust
- task oriented
- patience
9. Delegation Skills:
- Technical
- Mid to lower level employees
- interpersonal
- conceptual
10. Attributes of a delegated task: (SMART)
- Pertinent to employee’s job description
- Possible to achieve
- Measurable
- As detailed as possible
11. Deciding What Tasks to Delegate
- Easily completed
- Suited to employee’s skills
- Challenging
1. Mental
2. Physical
3. Interpersonal
Presentation Skills
1. COMPONENTS OF PRESENTATION:
1. skills of presenter
2. audience
3. venue
4. message
2. TYPES OF PRESENTATION:
1) motivate - acknowledge the audience's negative feelings and communicate a vision of the future in a positive way
2) inform - sequence information in logical order and ask questions to check audience's understanding
3) persuade - sell the benefits, support them with facts, appreciate audience's point of view
4) discussion - present set of options, listen to audience's views, answer audience's questions
5) entertain - amusing anecdotes, avoid telling jokes
3. WHAT MAKES A SUCCESSFUL PRESENTER?
1) behave confidently but not arrogantly - eye contact, normal voice, natural gestures/movements
2) interact with the audience but not too much - ask questions and encourage the audience to ask their own
3) demonstrate the physical skills of presenting - know your equipment
4. AUDIENCE CONCENTRATION
External factors:
1) Size of venue and audience
2) How well equipped the venue is
3) The time of day
5. PREPARING FOR PRESENTATION:
i) objective
write down the expected outcome of your presentation
ii) select presentation content
- choose quickly, which helps ideas flow and saves time
- identify main points and then connected sub-points - SINGLE WORDS, NOT SENTENCES
Options:
a) brainstorm - preferred for its flexibility and imaginative approach
b) linear list
iii) organize presentation content
arrange in logical order, e.g. chronological order
group your main points.
choose powerful sub-points
6. ORGANIZING YOUR PRESENTATION NOTES:
usually main headings with sub-point bullets - for a complicated or new presentation
sometimes main headings alone - for simple content or a presentation you have given before
occasionally read the presentation as a speech - e.g. in case of legal liability
7. ANXIETY
Rational Reasons for anxiety
- less experience
- on your own
- pressure - important presentation
Irrational Reasons for anxiety
- They won't like me
- Run out of material
- Completely forget what I want to say
Controlling anxiety
- hours before presentation - do normal work, relax, exercise
- be calm just before the presentation - sit/stand comfortably, get rid of irrational thoughts, control your breathing
8. REHEARSAL
rehearse straight after you have finished preparing your presentation
rehearse several times
rehearse in conditions as realistic as possible
Full rehearsal
deliver in the correct amount of time
Include your slides and other equipment
Use a space similar to the size of your actual venue
record the presentation
rehearse in front of audience
Partial rehearsal
Focus on difficult parts - start and end of presentation, introduction to a section
Practice movements and gestures
9. PRESENTATION ENVIRONMENT
seating arrangement - horseshoe/classroom style
equipment position - easy to access
shape and size of the room - comfortable for audience, be able to see the audience
arrive at the venue an hour before
Wednesday, May 12, 2010
Tips on Google Search
Google has become an indispensable tool for carrying out many of our day-to-day tasks. But chances are, unless you are a technology geek, you probably still use Google in its simplest form. If your current use of Google is limited to typing a few words in and changing your query until you find what you’re looking for, then here’s a better way - and it’s not hard to learn.
- Explicit Phrase:
Let's say you are looking for a shell scripting tutorial. Instead of just typing shell scripting tutorial into the Google search box, you will likely be better off searching explicitly for the phrase. To do this, simply enclose the search phrase within double quotes.
Example: "Shell scripting tutorial"
- Exclude Words:
Let's say you want to search for content about internet marketing, but you want to exclude any results that contain the term advertising. To do this, simply use the "-" sign in front of the word you want to exclude.
Example Search: internet marketing -advertising
- Site Specific Search:
Often, you want to search a specific website for content that matches a certain phrase. Even if the site doesn’t support a built-in search feature, you can use Google to search the site for your term. Simply use the "site:somesite.com" modifier.
Example: "policy document" site:indiapost.gov.in
- Similar Words and Synonyms:
Let's say you want to include a word in your search, but also want results that contain similar words or synonyms. To do this, use the "~" in front of the word.
Example: "internet marketing" ~professional
- Specific Document Types:
If you’re looking to find results that are of a specific type, you can use the modifier "filetype:". For example, you might want to find only PowerPoint presentations related to internet marketing.
Example: "internet marketing" filetype:ppt
- This OR That:
By default, when you do a search, Google will include all the terms specified in the search. If you want any one of several terms to match, you can use the OR operator. (Note: the OR has to be capitalized.)
Example: internet marketing OR advertising
- Phone Listing:
Let's say someone calls you on your mobile number and you don't know who it is. If all you have is a phone number, you can look it up on Google using the phonebook feature.
Example: phonebook:617-555-1212 (note: the provided number does not work – you’ll have to use a real number to get any results).
- Area Code Lookup:
If all you need to do is to look-up the area code for a phone number, just enter the area code and Google will tell you where it’s from.
Example: 411033
- Numeric Ranges:
This is a rarely used but highly useful tip. Let's say you want to find results that contain any of a range of numbers. You can do this by using the X..Y modifier (in case this is hard to read, what's between the X and Y are two periods). This type of search is useful for years (as shown below), prices, or anywhere you want to provide a range of numbers.
Example: president 1940..1950
- Stock (Ticker Symbol):
Just enter a valid ticker symbol as your search term and Google will give you the current financials and a quick thumb-nail chart for the stock.
Example: GOOG
- Calculator:
The next time you need to do a quick calculation, instead of bringing up the Calculator applet, you can just type your expression in to Google.
Example: 48512 * 1.02
- Word Definitions:
If you need to quickly look up the definition of a word or phrase, simply use the "define:" command.
Example: define:magnificent
Monday, May 03, 2010
Agile software development
Agile software development encompasses specific tools and techniques such as continuous integration, automated xUnit testing, pair programming, test-driven development, design patterns, domain-driven design, code refactoring, and other techniques that are often used to improve quality and enhance project agility.
I'll cover some of the most important agile methodologies that I found very useful in executing projects.
1. Test-driven development (TDD):
It is a software development technique that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test, and finally refactors the new code to acceptable standards.
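As a concrete sketch of the cycle (plain asserts standing in for a real xUnit framework, and the function here is just an example):

```cpp
#include <string>

// Red: the test was written first, referring to fizzbuzz() before it
// existed, so the build failed.
// Green: this is the minimal implementation that makes the test pass.
// Refactor: clean up the code while keeping the test green.
std::string fizzbuzz(int n) {
    if (n % 15 == 0) return "FizzBuzz";
    if (n % 3 == 0)  return "Fizz";
    if (n % 5 == 0)  return "Buzz";
    return std::to_string(n);
}
```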
Benefits
1. Using TDD means writing more tests, and programmers who write more tests tend to be more productive.
2. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging.
3. Test-driven development offers more than just simple validation of correctness, but can also drive the design of a program.
4. It allows a programmer to focus on the task at hand as the first goal is to make the test pass. Exceptional cases and error handling are not considered initially.
5. Eliminating defects early in the process usually avoids lengthy and tedious debugging later in the project.
6. TDD can lead to more modularized, flexible, and extensible code. See: the Mock Object design pattern.
7. Because no more code is written than necessary to pass a failing test case, automated tests tend to cover every code path.
Criticisms
1. The tests themselves become part of the maintenance overhead of a project.
2. The high number of passing unit tests may bring a false sense of security
3. Unexpected gaps in test coverage may exist or occur.
2. Mock Objects:
Mock objects allow you to set up predictable behavior to help you test your production code by emulating some functionality your code depends on. This might for example be a huge database which is too difficult and time consuming to maintain just for testing purposes.
References:
http://mockpp.sourceforge.net/ - a platform independent generic unit testing framework for C++
http://code.google.com/p/googlemock/
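Independently of those frameworks, the idea can be illustrated with a hand-rolled mock in C++ (all names here are made up for the example):

```cpp
#include <string>
#include <vector>

// Interface the production code depends on -- standing in for an expensive
// resource such as a real database.
struct Database {
    virtual ~Database() {}
    virtual void save(const std::string& record) = 0;
};

// The mock: records every call so a test can verify the interaction
// without touching a real database.
struct MockDatabase : Database {
    std::vector<std::string> saved;
    void save(const std::string& record) { saved.push_back(record); }
};

// Production code under test.
inline void archive(Database& db, const std::string& item) {
    db.save("archived:" + item);
}
```

A test then passes a MockDatabase to archive() and asserts on the recorded calls instead of querying a real database.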
3. Pair programming:
It is a software development technique in which two programmers work together at one work station. One types in code while the other reviews each line of code as it is typed in.
Benefits:
* Design quality: Shorter programs, better designs, fewer bugs.
* Reduced cost of development: With bugs being a particularly expensive part of software development, especially if they're caught late in the development process, the large reduction in defect rate due to pair programming can significantly reduce software development costs.
* Learning and training: Knowledge passes easily between pair programmers: they share knowledge of the specifics of the system, and they pick up programming techniques from each other as they work. New hires quickly pick up the practices of the team through pairing.
* Overcoming difficult problems: Pairs often find that seemingly "impossible" problems become easy or even quick, or at least possible, to solve when they work together.
* Improved morale: Programmers report greater joy in their work and greater confidence that their work is correct.
* Decreased management risk: Since knowledge of the system is shared among programmers, there is less risk to management if one programmer leaves the team.
* Increased discipline and better time management: Programmers are less likely to skip writing unit tests, spend time web-surfing or on personal email, or otherwise break discipline when they are working with a pair partner. The pair partner "keeps them honest".
* Resilient flow: Pairing leads to a different kind of flow than programming alone, but it does lead to flow. Pairing flow happens more quickly: one programmer asks the other, "What were we working on?" Pairing flow is also more resilient to interruptions: one programmer deals with the interruption while the other keeps working.
* Fewer interruptions: People are more reluctant to interrupt a pair than they are to interrupt someone working alone.
* Decreased risk of RSI: The risk of repetitive stress injury is significantly reduced, since each programmer is using a keyboard and mouse approximately half the time they were before.
4. Extreme Programming (XP):
It is a software development methodology intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development, it advocates frequent "releases" in short development cycles (timeboxing), which is intended to improve productivity and introduce checkpoints where new customer requirements can be adopted.
Other elements of Extreme Programming include:
1. programming in pairs or doing extensive code review,
2. unit testing of all code,
3. avoiding programming of features until they are actually needed,
4. a flat management structure,
5. simplicity and clarity in code,
6. expecting changes in the customer's requirements as time passes and the problem is better understood, and
7. frequent communication with the customer and among programmers.
XP attempts to reduce the cost of change by having multiple short development cycles, rather than one long one.
Criticism:
* A methodology is only as effective as the people involved; Agile does not solve this
* Often used as a means to bleed money from customers through lack of defining a deliverable
* Lack of structure and necessary documentation
* Only works with senior-level developers
* Incorporates insufficient software design
* Requires meetings at frequent intervals at enormous expense to customers
* Requires too much cultural change to adopt
* Can lead to more difficult contractual negotiations
* Can be very inefficient — if the requirements for one area of code change through various iterations, the same programming may need to be done several times over. Whereas if a plan were there to be followed, a single area of code is expected to be written once.
* Impossible to develop realistic estimates of work effort needed to provide a quote, because at the beginning of the project no one knows the entire scope/requirements
* Can increase the risk of scope creep due to the lack of detailed requirements documentation
* Agile is feature-driven; non-functional quality attributes are hard to express as user stories
5. Scrum
Scrum is an iterative, incremental framework for project management and agile software development.
The main roles in Scrum are:
1. the “ScrumMaster”, who maintains the processes (typically in lieu of a project manager)
2. the “Product Owner”, who represents the stakeholders, represents the business
3. the “Team”, a cross-functional group of about 7 people who do the actual analysis, design, implementation, testing, etc.
During each “sprint”, typically a two to four week period (with the length being decided by the team), the team creates a potentially shippable product increment (for example, working and tested software). The set of features that go into a sprint come from the product “backlog”, which is a prioritized set of high-level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. After a sprint is completed, the team demonstrates the use of the software.
Daily Scrum Meetings
Each day during the sprint, a project status meeting occurs. This is called a “daily scrum”, or “the daily standup”. This meeting has specific guidelines:
* The meeting starts precisely on time.
* All are welcome, but only “pigs” (i.e. the ones committed to the project in the Scrum process) may speak
* The meeting is timeboxed to 15 minutes
* The meeting should happen at the same location and same time every day
During the meeting, each team member answers three questions:
* What have you done since yesterday?
* What are you planning to do today?
* Do you have any problems preventing you from accomplishing your goal? (It is the role of the ScrumMaster to facilitate resolution of these impediments. Typically this should occur outside the context of the Daily Scrum so that it may stay under 15 minutes.)