Mobile Computing and Its Effectiveness

Mobile computing has become a crucial technology in people’s lives. It gives people various options for using technological devices regardless of location. Mobile computing is the ability to use a computing device even while it is mobile or changing location; portability is central to the concept. It spans a range of devices, from notebook computers to smartphones such as the iPhone and BlackBerry to typical cell phones, and it has become an indispensable part of daily life. Mobile notebooks and laptops generally rely on two types of wireless connection. The first is Wi-Fi, which uses radio waves to transmit a signal from a wireless router to the immediate area. If the network is not encrypted, anyone can gain access to it. This kind of connection is used to create “hotspots” in public places, which means one has to locate a hotspot and stay within the range of the connection. The second alternative is cellular broadband, which uses a modem or air card for the internet connection. With this service, one does not have to remain stationary, as the signal stays strong anywhere there is cellular service.

Another service linked with mobile computing is cloud computing, the ability to use website services from portable computers. Portable computing also grants access to a firm’s virtual private network (VPN) by tunneling through the internet. It makes life easier and work more manageable, and it increases productivity, especially in developed countries where technology use is high. This clearly shows that mobile computing has many positive effects on people’s lives. However, it also has its limitations.

Advantages of mobile computing

1) Portability- devices used in mobile computing are portable, making it easy to complete tasks. Portability means they can be moved from one place to another, so one has the freedom to work from any place he or she wishes. This can be regarded as flexibility of location.

2) Increase in production- returns are high because one is not tied to a fixed workplace. A person is also in a position to multitask, thus increasing productivity. It also minimizes time wastage, especially where one would otherwise have to report to an office on a daily basis.

3) Marketing of products and services made easy- one can market products and services online from anywhere, as long as there is a fast internet connection.

4) Communication made easy- calls can be made, and text messages and emails sent or received, anywhere using mobile devices.

Disadvantages of Mobile Computing

1) Insufficient bandwidth- with mobile computing, the internet connection is slower than a direct cable connection. There are cheap technologies available, such as EDGE (Enhanced Data Rates for GSM Evolution), GPRS (General Packet Radio Service), HSDPA (High-Speed Downlink Packet Access), and HSUPA (High-Speed Uplink Packet Access) 3G, which are fast but can only be accessed within a certain range.

2) Power usage- mobile computing relies on battery power when a mains connection or portable generator is not available. This means one has to buy an expensive battery that can hold power for a long time, which increases cost.

3) Security standards- when a person works on a mobile basis, he or she depends on public networks, so use of a VPN (virtual private network) is necessary. If the VPN is not used carefully, it can be attacked through the massive number of networks interconnected through the line.

4) Health problems- people who tend to use mobile gadgets while driving run a high risk of causing accidents due to divided attention. Mobile phones are also likely to interfere with sensitive medical devices, and some believe that mobile phones pose health hazards.

5) Transmission interferences- in most cases, many interferences are involved in mobile computing, including terrain, weather, and the range to the nearest signal point. These can interfere with the passage of the signal, and transmission in some buildings, tunnels, and rural areas is often slow.

6) Human interface with devices- some components of the devices used are not user friendly. For example, some keyboards and screens tend to be small, which makes them hard to use. In addition, some techniques, like handwriting and speech recognition, require thorough training.

One City Had Several Returns for Each Type of Personal Tech Device In Their Product Line

Every time I read an online review for a personal tech product I just cringe, and I must say I am skeptical regardless of who is reviewing or what they’ve said. Indeed, I’m sure you’ve heard by now that there are algorithms which can predict which product reviews are false and which are legitimate. Unfortunately, many of these strategies have been described in the personal tech news, things such as the fact that a real reviewer uses more of a personal voice rather than the second or third person. Of course, those who write fake reviews are now adjusting them to look more like real reviews.

This reminds me of a cat-and-mouse game, good guys versus bad guys, and of the way the world’s militaries try to one-up their enemies with the greatest technologies of the time. Still, that’s not the only problem; another has to do with the integrity of the reviewer. How can you trust someone reviewing a product who lacks ethics and integrity?

Not long ago, I was consulting for a small personal tech company, and as I was going through their product-return data while we worked on a Six Sigma process strategy, I was quite concerned about a couple of cities with unusually high returns. One city had exactly 2 returns of each and every single personal tech device in their product line, consistent over a three-year period.

Now mind you, it was very rare for the company to have any returns, but to have exactly 2 in the same city over and over again made me stop and wonder whether something was really wrong with the product, or whether people were using the product for a couple of days and then returning it. Was there a competitor, or a designer, who wanted to survey these devices and then take them back to the store? Well, curiosity got the best of me; perhaps I should have been a CSI-type crime investigator.
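The pattern that caught my eye, identical return counts for every product in one city, is exactly the kind of anomaly a short script can surface. The sketch below is purely illustrative: the city names, product names, and counts are invented, since the company's real data was never published.

```python
from collections import defaultdict

# Hypothetical return records: (city, product, year). Entirely made up
# for illustration; the real data set is not public.
returns = [
    ("San Jose", "phone", 2009), ("San Jose", "phone", 2009),
    ("San Jose", "tablet", 2009), ("San Jose", "tablet", 2009),
    ("Denver", "phone", 2009),
]

def flag_uniform_returns(records, expected=2):
    """Flag cities where every product shows exactly `expected` returns."""
    counts = defaultdict(int)
    for city, product, _year in records:
        counts[(city, product)] += 1
    by_city = defaultdict(list)
    for (city, _product), n in counts.items():
        by_city[city].append(n)
    return [city for city, ns in by_city.items()
            if all(n == expected for n in ns)]

print(flag_uniform_returns(returns))  # San Jose qualifies; Denver does not
```

Run against three years of data, a check like this would have flagged the suspicious city immediately instead of being stumbled on by hand.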

Anyway, it turned out there were two individuals in the same city who were competing in the online product review categories for personal tech devices. Each of them had a blog, and the corresponding blog posts matched the dates of purchase and return of these products within two days of each other. In other words, these product reviewers were buying these personal tech devices, trying them out, and then returning them.

That’s all fine and good, but the company would have sent them devices to test without causing conflict at the retailer. And to my earlier point: how can you trust someone who would pull such an underhanded trick, with such lack of integrity, to fairly review any product, yours or the competition’s? Please consider all this and think on it.

Eco-Efficiency in Policy Analytics for Program Management

Although many people might see protection of IP as being counter to the ideal of a university’s mission to educate, nothing could be further from the goal or truth of IP protection. Protection does not mean that others do not learn about the invention or discovery; it simply allows the developer or inventor to retain the right to generate products from the invention. Additionally, because patents require a full written description of the technology, a patent is DESIGNED to advance the body of knowledge for us all. (Rondelli, 2014)

San Diego State University is closely tied to its community, internally as a program manager and fund-generating force, and externally as a steward for broad community development and investment in the local environment.

The Technology Transfer program that has most bearing on San Diego State University’s environmental community is the partnership with the State of California’s Energy Commission Energy Innovations Small Grant Program.

The Energy Innovations Small Grant (EISG) Program provides up to $95,000 for hardware projects and $50,000 for modeling projects to small businesses, non-profits, individuals and academic institutions to conduct research that establishes the feasibility of new, innovative energy concepts. Research projects must target one of the PIER R&D areas, address a California energy problem, and provide a potential benefit to California electric and natural gas ratepayers. (Energy Innovations Small Grant Program, 2014)

This program is administered by Frank H. Steensnaes and is run from the San Diego State University Research Foundation’s office located at 6495 Alvarado Ct. Suite 103, San Diego, California 92182. The far-reaching effects of this program impact many academic institutions, private commercial enterprises, and civil organizations. The delivery of this program has influenced the energy and financial markets in California and throughout the country. To understand the regional environmental ecosystem surrounding San Diego State University, it is crucial to consider the broader sustainability perspective of the institution. San Diego State is dedicated to the delivery of social, ecological, and economic growth for its community and students.


Defining Sustainability

“In today’s world, the energy we use and the ways we use it are changing. For California to make the leap from the status quo to achieving climate and energy goals at the lowest possible cost, we need much more energy innovation” (Energy Innovations Small Grant Program, 2014). It is from this premise that the need arises to create a framework for program management that also adheres to eco-efficiency principles. This framework promises leadership in structure and outcomes; it also offers a paradigm for stewardship in business relationships so collaboration can move forward. It will be crucial that every step of innovation demonstrate its inherent value to society by seamlessly adhering to social, ecological, and economic parameters.

Sustainability can be defined in terms of eco-efficiency. “In simplest terms, it means creating more goods and services with ever less use of resources, waste and pollution” (Development, 2000). By defining sustainability in this manner, we see the integration of people, planet, and profit into a workable framework. The need to provide a prescriptive methodology for adherence to specific goals and desired outcomes becomes a focal point not only for the Technology Transfer program, but also for my organization in harmonizing efforts to report and commercialize innovations.


To manifest our desired ending, we must first understand the state of the outcome before rushing to obtain that conclusion. The Research Foundation Technology Transfer program envisions a state of being that produces academic projects with economic value. The ability to foster innovation into real-world applications is the desired outcome. With unbridled capacity to assist students, faculty, and the community into gratifying and rewarding ventures, it is possible to provide the nurturing and profitable environments so many seek when pursuing higher education. San Diego State University is in the business of research, scholarship, and creativity. The execution of programs is measured in continued success as well as in deeper penetration into the local, national, and international arenas. Only by defining the ideal situation for the program can we begin outlining the steps toward its accomplishment. This top-down approach works in many ways. The main benefit, of course, is the ability to empower participants to pursue more freedom and creativity.

By first defining the desired outcome, we can best serve the participants. Eco-efficiency promotes the concept of minimizing exploitation of scarce resources. It also examines the synergy of the built environment with the natural environment. Lastly, eco-efficiency mandates that increased economic, environmental, and equity gains can be monetized. By understanding lifecycle and intrinsic principles of embodied value, it is possible to reach the anticipated goals of the program. “Establishing framework conditions which foster innovation and transparency and which allow sharing responsibility among stakeholders will amplify eco-efficiency for the entire economy and deliver progress toward sustainability” (2000). By avoiding the deliberate sanctioning of paralysis by analysis, we seek to execute vision and not toil over procedure.


The manifestation of eco-efficiency in program delivery is an innovative solution to unexplored opportunities in the field. “Eco-efficiency is not limited simply to making incremental efficiency improvements in existing practices and habits. That is much too narrow a view. On the contrary, eco-efficiency should stimulate creativity and innovation in the search for new ways of doing things” (2000). In order to view the constraints of adoption of eco-efficiency, we will examine Michael Ben-Eli’s five fundamental domains for sustainable development. The five domains (Ben-Eli, 2014) are:

The Material Domain: Constitutes the basis for regulating the flow of materials and energy that underlie existence.

The Economic Domain: Provides a guiding framework for creating and managing wealth.

The Domain of Life: Provides the basis for appropriate behavior in the biosphere.

The Social Domain: Provides the basis for social interactions.

The Spiritual Domain: Identifies the necessary attitudinal orientation and provides the basis for a universal code of ethics.

It is crucial at this stage to follow Ben-Eli’s framework to ensure compliance with the underlying principles of sustainability as described in the five domains.

Domain One: The Material Domain

The intent of the first domain is to examine the use of eco-efficiency in terms of resources. Eco-efficiency “is not limited to achieving relative improvements in a company’s use of resources and its prevention of pollution. It is much more about innovation and the need for change toward functional needs and service intensity, to contribute to de-coupling growth from resources” (2000). As the Technology Transfer program redefines its policies regarding resource orientation, it can begin by measuring the consumption not only of its operations but also of the embodied energy and matter in the delivery of its program. This examination of the flow of resources is comprehensive, yet considerate of the many stakeholders and functions it actually serves.

Domain Two: The Economic Domain

In this next domain, we ask ourselves the question of bio-spheric pricing and the business rationale for incorporating the eco-efficiency parameters of sustainability in the program environment. “The business case for eco-efficiency applies to every area of activity within a company – from eliminating risks and finding additional savings through to identifying opportunities and realizing them in the marketplace” (2000). This is a vital area of opportunity for the program. In what metrics can the true value of its efforts be accounted for? Will a redefining of success be needed, or will its stated objective of commercializing academic projects suffice? Will the private sector permit the augmentation and redefining of economic value in its dealings with the institution? It would behoove both institutions to garner more of the true value, not only in one-off interactions but also in long-term support for initiatives, whether or not large financial rewards are part of the project. The ability to create such metrics would facilitate the recognition of this domain.

Domain Three: The Domain of Life

It is of particular importance to define the parameters of this domain. How could such a program exist if the value of all forms of life were sacrificed for material gain? Diversity is the desired outcome for the program. It requires little work to understand the power and influence this program may have on people. Defining how to measure these values, however, is a more difficult task.

The chart “Governmental Measures and Objectives” (2000, p. 25) reveals the correlation of economic value, quality of life, and environmental impact.

“Our argument is that, by adopting eco-efficient practices, it is possible to decouple these trends so that, as the dotted lines show, the economy and quality of life continue to rise while resource use and pollution fall away. Indeed, by reducing the pressure on natural resources and the environment we will actually magnify the improvement in the quality of life” (2000). This program has the real ability to impact lives and should, as part of its purview, consider how the monetization of academic projects will affect not only the institution but also its participants. Not only will this understanding help divert monies and resources to where they really need to be, but it may also improve the technical assistance participants receive. This consideration also promotes the diversity the school mandates.

Domain Four: The Social Domain

The fourth domain helps expand the role and purpose of the program. By incorporating policy that ensures its participants are tolerant, believe in fundamental truths, and practice inclusion and scholarship, the program moves closer to realizing its real purpose. “All these thoughts reinforce the still fragile idea that open processes, responsive structures, plurality of expression, and the equality of all individuals ought to constitute the corner-stones of social life” (2014). When considering the existing body of work from the program, it becomes clear that no such opus is demanded or required. It would assist in the ascertainment of true value to explore this concept of personal value systems for proper alignment. Eco-efficiency describes, “In several economic sectors, considerable costs caused by environmental pollution and social damage are still not included in the price of goods and services. Until this is changed, the market will continue to send wrong signals and polluters will have no incentive to change and adapt the performance of their products and processes” (2000). Of all five domains, the opportunity for the most substantive traction can occur here. By thinking more holistically about the metrics of reporting, the Technology Transfer program could make headway by implementing solutions in this domain.

Domain Five: The Spiritual Domain

This last domain entails the reconciliation of love in the workplace. Often seen as disparate concepts with zero chance of coexistence, we must find some broader universal label to encompass the idea. The concept of servant leadership can be viewed as a solution. Barbuto and Wheeler, in their journal article Scale Development and Construct Clarification of Servant Leadership, reflect that, “The integration of servant leadership principles in practice has less to do with directing other people and more to do with serving their needs and in fostering the use of shared power in an effort to enhance effectiveness in the professional role” (2006, p. 425). Empathy is defined as “the intellectual identification with or vicarious experiencing of the feelings, thoughts, or attitudes of another” (2014). A real opportunity exists to affect the spiritual being of the participants of the program. It would not need to be a procedural policy but a social one. Servant leaders are more likely to be involved in a functional two-way transmission of energy and data. Not only do desired outcomes become more focused based on communication, but the real success can then be measured in personal growth and actualization.


As defined earlier, eco-efficiency promises leadership in structure and outcomes; it also offers a paradigm for stewardship in business relationships so collaboration can move forward. At the program level, I have designed the following eco-efficient outcomes:

1) Use participant feedback and historic data to establish a decoupling point between the program participants and the program administrators for modification or enhancement to the EISGTTP/PIER.

2) Ascertain the criticality of sustainable practices within program delivery, beyond the funding mechanism, into nontraditional economic evaluation.

Several fundamental issues will be examined, including the parameters of the change necessary to achieve stated program goals such as commercialization outcomes and advancement of technology. We move from those goals to the reality of the outcomes, which measures the expansion of knowledge. This comparison would be deficient without consideration of secondary issues such as the recommendations to shape a more fulfilling program experience. Three layers of recommendations will be designed. The first layer consists of programmatic recommendations. The second layer is of non-programmatic recommendations. Lastly, we must include strategic recommendations to ensure the expansion of the accumulated body of knowledge.

I am undertaking a nine-step methodology in performing the examination of the program. The first step involves collecting technical data from the previous 13 years of the program. The next step will be to validate the data. The third step involves developing a problem statement. The fourth step involves conducting a root cause analysis. The fifth step involves correlating the data to the mission and vision of the program. The sixth step involves developing a corrective action plan (CAP) to mitigate, eliminate, or offset any discrepancies in the program. The seventh step analyzes the impacts of such recommendations. The eighth step involves coordinating stakeholders to share findings and receive feedback. The last step is to create a monitoring and reporting mechanism to introduce traceability into future interactions with clients.
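As a rough illustration of the sequence, not of any actual implementation, the nine steps can be laid out as an ordered pipeline. The step names below paraphrase the methodology; the runner is a placeholder that merely logs each stage.

```python
# A minimal sketch of the nine-step examination as an ordered pipeline.
# Step functions are placeholders; a real run would operate on the
# program's 13 years of historical data.
STEPS = [
    "collect 13 years of technical data",
    "validate the data",
    "develop a problem statement",
    "conduct a root cause analysis",
    "correlate data to mission and vision",
    "develop a corrective action plan (CAP)",
    "analyze impacts of recommendations",
    "coordinate stakeholders for feedback",
    "create a monitoring and reporting mechanism",
]

def run_examination(steps):
    log = []
    for i, step in enumerate(steps, start=1):
        log.append(f"step {i}: {step}")  # real work would happen here
    return log

for line in run_examination(STEPS):
    print(line)
```

Keeping the steps as data rather than hard-coded calls makes it easy to reorder or extend the methodology as the examination matures.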

In my reading of Ben-Eli’s systems thinking and cybernetics, it is clear that one must consider external and internal environmental interactions in systems, or groupings of systems called networks. From this point, we must consider the “regulation, adaption, and evolution” of response as defined by cybernetics to understand the relationship of first-order change (stagnation) or second-order change (metabolism/diffusion) with either open or closed loops of feedback. With first-order change, the response manifests itself only on a combination of five possible dimensions: material, economic, domain of life, social, and spiritual (Ben-Eli, The Cybernetics of Sustainability: Definition and Underlying Principles, 2012).

As I reflect on the five dimensions, it becomes possible to implement a plan of action for the overall examination of the program. Dimension 1 asks us to consider the material impacts of the program. The second dimension looks at human development in economic analysis, which is particularly salient in discovering one of the implementation designs: viewing program and individual success in terms beyond traditional financial metrics. The third dimension ensures diversity and connectedness in sustainable development. The fourth dimension seeks liberation of the individual from a global perspective. Last, the fifth dimension asks us to link spiritually in the actions of the program.

The communication process in examining the program for eco-efficiency involves a complete stakeholder analysis, including a specific metric for social impact. In traditional program management, a stakeholder is anyone who can affect or influence the program’s success or failure. The benefits of a clear communication plan include management of uncertainty, scope, and change. When moving beyond the traditional definition of stakeholder, we seek the feedback of the community. As Edwards stated, “The Sustainability and Community principles encompass all the Three Es (ecology, economy, and equity) because they grapple with difficult problems whose long-term solutions require a systemic approach” (2010, p. 29).

The next issue to consider is the amount of education or training required to install a program-wide expectation of continuous learning. This issue is critical since, as Orr told us, “Even if humans were able to learn more rapidly, the application of fast knowledge generates complicated problems much faster than we identify them and respond” (2011, p. 282). What is needed is the institution of a set of best practices mandating a constant appraisal of worldviews while extending program benefits into long-term outcomes.

For the framework, we can seek guidance from a myriad of sources. One such resource is the data generated from other program audits, reviews, and detailed annual reports of various university technology transfer offices, such as the Performance Audit of Arizona’s Universities by the Arizona Office of the Auditor General (Report 08-02, May 2008) and the Report of the Purdue University Office of Technology Commercialization (2010). It is fundamental to use parametric information derived from experience. By leveraging the compiled information, knowledge can then be extracted as best practices.

In the final structure of the program examination, a means of conveying the recommendations must be created. “To change the system so that it is sustainable and manageable” (2004, p. 3048), it is imperative to coalesce efforts to reduce redundancies while adhering to the founding principles not just of the program but also the tenets of eco-efficiency and sustainable development at large. Without procedural instruction, the malaise inherent to the human condition may diminish the environmental, economic, and social well-being of program participants. This task provides the prescriptive framework forward.

Using the Best Registry Cleaners to Their Maximum Potential

Understanding the Current Set of Best Registry Cleaners

You might wonder how you can separate the best registry cleaners from the rest so you have a safe selection to work with.

You can try exposing yourself to a top listing of the best registry cleaners, complete with reviews, so you can learn why a particular registry cleaner gets the special mention.

Figure out what features you think you may need in the future.

Although registry cleaning functions should be your top priority, it is even better to seek something that can handle your entire computer maintenance, so you can carry out maintenance operations using fewer programs.

Why it is Important to Use these Registry Cleaners to their Maximum Potential

All versions of Microsoft Windows have their own maintenance tools that can be used together to troubleshoot basic problems and keep the system up and running.

To make sure that you actually use these features, check whether each feature is useful to you or better than an existing one.

Things you can do with a Good Registry Cleaner

The best registry cleaners are packed with more features than the average cleaner. However, each still retains the most basic function: scanning the entire registry for errors.

Errors are usually defined as registry entries that point to files that no longer exist.

The registry cleaner attempts to fix these errors by either deleting the error or changing the value so that it properly links to a file.
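The scan-and-fix behavior described above can be modeled with a toy example. This sketch treats the registry as a simple mapping from entry names to file paths; a real cleaner would use the Windows registry API rather than a dictionary, and the entry names here are invented.

```python
import os
import tempfile

# Toy model of a registry: entry name -> file path it points to.
# (Illustrative only; real cleaners query the actual Windows registry.)
def scan_registry(registry):
    """Return the entries whose target file no longer exists."""
    return [name for name, path in registry.items()
            if not os.path.exists(path)]

def fix_errors(registry):
    """Fix errors by deleting entries that point to missing files."""
    for name in scan_registry(registry):
        del registry[name]
    return registry

good = tempfile.NamedTemporaryFile(delete=False)  # a file that exists
registry = {"AppA": good.name, "AppB": "/no/such/file.dll"}
print(scan_registry(registry))   # ['AppB']
fix_errors(registry)
print(sorted(registry))          # ['AppA']
```

The same model extends naturally to the other repair strategy mentioned above, updating the stored path to a valid file instead of deleting the entry.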

The advantage of optimizing the registry is improved system performance and increased system stability.

With a good cleaner, you will be presented with multiple scanning options. The most convenient type of scanning option is the “Automatic Scan” option which is designed to scan and fix errors in one clean sweep without any user interaction.

Registry scanners with this feature try to achieve this without performing any risky edits, so all programs in the system that rely on the registry should still function.

Still, it always helps if the cleaner has a backup function, so that the entries that need changing are exported to a separate file and the changes can be undone if needed.
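Continuing the toy model from before, the backup function can be sketched as exporting the entries about to change to a separate file, then restoring them if the edit needs to be undone. The entry names and paths are invented for illustration.

```python
import json
import os
import tempfile

# Sketch of a backup-before-change workflow on a dict-based registry model.
def backup_entries(registry, names, backup_path):
    """Snapshot the named entries to a JSON file before editing them."""
    snapshot = {n: registry[n] for n in names}
    with open(backup_path, "w") as f:
        json.dump(snapshot, f)

def undo_changes(registry, backup_path):
    """Restore the snapshotted entries, undoing the cleaner's edits."""
    with open(backup_path) as f:
        registry.update(json.load(f))

registry = {"AppA": "C:/old/path.dll"}
backup = os.path.join(tempfile.mkdtemp(), "backup.json")

backup_entries(registry, ["AppA"], backup)
registry["AppA"] = "C:/new/path.dll"   # the cleaner's edit
undo_changes(registry, backup)
print(registry["AppA"])                # restored to C:/old/path.dll
```

Writing the snapshot before touching anything is the key ordering: if the edit turns out to break a program, the old values are still on disk.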

Advanced registry scans may be performed as well where the user will be presented with options on what exact areas to scan.

This is useful for slower computers where errors need to be fixed much more quickly. It also reduces the instances of fixing false positives, because you can tell it to scan limited areas of the registry.

User’s Guide

How you use a registry cleaner depends on the program you choose, so it is recommended to get started with a friendly program such as Perfect Optimizer or Registry Easy.

These are just two of the best registry cleaners that have a friendly user interface so you can do quick registry scans directly from the menu.

To get started, restart your computer and load the registry cleaner. Do the fullest scan possible where every part of your registry is checked for errors.

This is the longest method of registry repair, but it should fix many problems, especially if you have never performed a registry scan since Windows was installed.

The registry cleaner should back up the changes anyway, so there is little risk in doing this. Once the changes are made, restart the computer again and run your favorite applications to check for improvements and to ensure that things work correctly.

If your computer suffers from specific problems, like third-party toolbars invading your browser, use the browser helper objects manager to enable only the Internet Explorer features you need.

A function that restores Internet Explorer’s settings may also be available to counter some worms.

If the performance of your system is still slow, run an optimization feature that compacts your registry.

Then use any cleaning features it has to delete unnecessary files and other data that may compromise privacy.
Why it is Important to Use these Registry Cleaners to their Maximum Potential

All versions of Microsoft Windows have their own maintenance tools that can be used together to troubleshoot basic problems and keep the system up and running.

To make sure that you actually use these features, you must see if the feature is useful to you or better than an existing feature.

Things you can do with a Good Registry Cleaner

The best registry cleaners are packed with more features than the average cleaner. However, it still retains its most basic function to scan the entire registry for any errors.

Errors are usually defined as registry entries that point to files that no longer exist.

The registry cleaner attempts to fix these errors by either deleting the error or changing the value so that it properly links to a file.

The advantage of optimizing the registry is to improve system performance and increase system stability.

With a good cleaner, you will be presented with multiple scanning options. The most convenient type of scanning option is the “Automatic Scan” option which is designed to scan and fix errors in one clean sweep without any user interaction.

Registry scanners with this feature try to achieve this without performing any risky edits so all programs in the system that relies on the registry should still function.

Still, it always helps if the cleaner has a backup function so that the entries that need changing are exported to a separate file so the changes can undone if needed.

Advanced registry scans may be performed as well where the user will be presented with options on what exact areas to scan.

This is useful for slower computers where errors need to be fixed a lot quicker. It also reduces the instances of fixing false positives because you can tell it to scan limited areas of the registry.

User’s Guide

How you use a registry cleaner depends on the program you choose, so it is recommended to start with a friendly program such as Perfect Optimizer or Registry Easy.

These are just two of the best registry cleaners that have a friendly user interface so you can do quick registry scans directly from the menu.

To get started, restart your computer and load the registry cleaner. Run the fullest scan possible, in which every part of your registry is checked for errors.

This is the most time-consuming method of registry repair, but it should fix a large number of problems, especially if you have never performed a registry scan since Windows was installed.

The registry cleaner should back up the changes anyway, so there is little risk in doing this. Once the changes are made, restart the computer again and run your favorite applications to check for improvements and to ensure that everything still works correctly.

If your computer suffers from specific problems, such as third-party toolbars invading your browser, use the browser helper objects manager to keep only the Internet Explorer add-ons you actually need.

Current Management Opportunities and Challenges in the Software Industry

During the past 30 years the world went through a very dynamic technological transformation. In retrospect, it can be stated without exaggeration that the emergence of electronic devices and the Internet has greatly impacted daily life as well as managerial practice to an unforeseen extent. The computerization of multiple business processes and the creation of large-scale databases, among many other radical technological advances, have led to enormous cost savings and quality improvements over the years. The interconnection of financial markets through electronic means and the worldwide adoption of the Internet have greatly reduced transaction and communication costs and brought nations and cultures closer to one another than ever imaginable. Computers are now fundamental tools in almost all businesses around the world, and their application and adaptation to specific business problems in the form of software development is a practice that many companies perform on their own. In the past, such computerization and automation efforts were very costly and therefore practiced only by large corporations. Over the years, however, the software industry emerged to offer off-the-shelf solutions and services to smaller companies. Today, having survived the massive dotcom crash of the year 2000, software development businesses have established themselves as strong players in the technology industry.

The emergence of numerous computer standards and technologies has created many challenges and opportunities. One of the main opportunities offered by the software sector is its relatively low entry barrier. Since the software business is not capital intensive, successful market entry largely depends on know-how and specific industry domain knowledge. Entrepreneurs with the right skills can compete relatively easily with large corporations and thereby pose a considerable threat to other, much larger organizations. Companies, on the other hand, need to find ways to reduce turnover and protect their intellectual property; the strong dependence on knowledge combined with the relatively short lifespan of computer technologies makes knowledge workers very important to the organization. Knowledge workers in this industry therefore enjoy stronger bargaining power and require a different management style and work environment than in other sectors, especially those industries that have higher capital requirements for market entry. This relatively strong position of software personnel challenges human resource strategies in organizations, and it also raises concerns about the protection of intellectual property.

The relatively young industry is blessed with seemingly endless new opportunities, such as the ability of companies to cooperate with other organizations around the globe without interruption and at practically no communication cost. In addition, no import tariffs exist, making the transfer of software across borders very efficient; however, the industry, with its craft-like professions, suffers from a lack of standards and from quality problems. The successful management of such dynamic organizations challenges today’s managers as well as contemporary management science, because traditional management styles, such as Weberian bureaucracies, seem unable to cope with unstable environments.

Challenges in the Software Industry

Many studies indicate that present-day software development practices are highly inefficient and wasteful (Flitman, 2003). On average, projects are only 62% efficient, which translates to a waste of 37%. The typical software development project has the following distribution of work effort: 12% planning, 10% specification, 42% quality control, 17% implementation, and 19% software building (2003). There are many possible interpretations of this distribution of resources. First, the extraordinarily high share of 42% for quality control can indicate a lack of standards and standardized work practices. This large waste of effort may also be the result of inefficient planning and specification processes. Because the share of 19% for software building is a function of software complexity, hardware, and the tools used, there is a chance to reduce it by carefully managing and standardizing internal work processes. The disappointing share of only 17% for implementation, however, should be alarming to business owners, since implementation is the main activity that generates revenue. The relatively low productivity level reported by Flitman (2003) also seems to be reflected in the fact that the average U.S. programmer produces approximately 7,700 lines of code per year, which translates to just 33 lines per workday (Slavova, 2000). Considering that a large software product, such as Microsoft Word, is reported by Microsoft to require 2 to 3 million lines of code, it becomes obvious how costly such projects can become and why productivity and quality management are major concerns for today’s software businesses. The challenge for contemporary software managers is to find the root of the productivity problem and a remedy in the form of a management practice.

A plethora of recent studies addresses software development productivity and quality concerns. Elliott, Dawson, and Edwards (2007) conclude that there is a lack of quality skills in current organizations. Furthermore, the researchers put partial blame on prevailing organizational cultures, which can lead to counterproductive work habits. Among the main problems identified, project documentation was found to be lacking because documents are deficient in detail and not updated frequently enough. Quality control in the form of software testing is not practiced as often as it should be, and there seems to be a lack of quality assurance processes to ensure that software is built with quality in mind from the beginning. Organizational culture was found to be deficient in companies where workers tend to avoid confrontation and therefore avoid product tests altogether (2007).

Since knowledge workers are the main driving force in software organizations, creating a fruitful and efficient organizational culture constitutes a main challenge for today’s managers. The relationship between organizational culture and quality and productivity in software businesses was recently investigated by Mathew (2007). Software organizations tend to be people-centered, and their dependency on knowledge workers is also reflected in their enormous spending on remuneration and benefits, which amounts to more than 50% of revenue. As the industry matures and grows further, the challenge for organizations is that larger numbers of employees need to be managed, which brings culture into the focus of management. Mathew (2007) found that the most important influence on productivity was achieved by creating an environment of mutual trust. Higher levels of trust lead to greater employee autonomy and empowerment, which strengthens the existing management view that trust and organizational effectiveness are highly related. Those companies with higher trust and empowerment levels benefitted from more intensive employee involvement and thereby achieved better quality products (2007).

Product quality, however, depends on other factors as well that reach beyond the discussion of work processes. Relatively high employee turnover was found to have a detrimental effect on product quality and organizational culture (Hamid & Tarek, 1992). Constant turnover and succession increase project completion costs, cause considerable delays, and expose organizations to higher risks because their development processes can be severely disrupted. While human resources strategies should help find ways to retain key personnel in the company, organizations nevertheless need to be prepared for turnover and to minimize its risks. One of the greatest risks for people-centered, knowledge-worker organizations is the loss of knowledge when employees leave.

Knowledge management has evolved into a relatively new discipline over the last two decades but is mostly practiced only by large, global organizations (Mehta, 2008). As corporations realized the importance of knowledge management activities for mitigating the risk of know-how loss within their organizations, they started employing chief knowledge officers and teams with the goal of collecting and organizing information. By building custom knowledge management platforms, companies can benefit from increased transfer, storage, and availability of critical business information. Such activities can help companies innovate and build knowledge capital over time (2008). The challenge remains, however, to set up such systems and to elicit employee support for them. In addition, these systems leave another critical question open: what happens when top performers take all their knowledge with them when they leave?

Another crucial variable affecting software product and service quality is top management involvement. Projects in the software industry commonly fail due to one or a combination of the following three major causes: poor project planning, a weak business case, and a lack of top management support and involvement (Zwikael, 2008). While software projects are similar to projects in other industries in their focus on timely completion, budget, and compliance with specifications, the industry requires specific support processes from top management to facilitate projects. These processes are summarized in Table 1. Key support processes, such as the appropriate assignment of project managers and the existence of project success measurement, indicate that successful companies demonstrate a higher level of project progress control than others; however, Zwikael acknowledges that top managers rarely focus on these key processes and instead prefer to deal with those processes that are easier for them to work on personally.

Table 1

The ten most critical top management support processes in the software sector (Zwikael, 2008). Those processes marked with an asterisk (*) were found to be the most important.

Support Process

Appropriate project manager assignment *

Refreshing project procedures

Involvement of the project manager during initiation stage

Communication between the project manager and the organization *

Existence of project success measurement *

Supportive project organizational structure

Existence of interactive interdepartmental project groups *

Organizational projects resource planning

Project management office involvement

Use of standard project management software *

Opportunities in the Software Industry

The advent of low cost communication via the Internet and the diversification of the software industry into many different branches brought a multitude of new market opportunities. Some of the main opportunities are rooted in the low costs of communication, while others originated from the possibility of geographic diversification and international collaboration.

One major opportunity, which especially larger organizations seek to seize, is geographic diversification in the form of globally distributed software development. Kotlarsky, Oshri, van Hillegersberg, and Kumar (2007) have researched this source of opportunities, which is mainly exploited by multinational companies; however, an increasing number of small companies are also reported to be benefitting from dispersed software development across national boundaries. The study revealed that software companies can achieve significantly higher levels of productivity by creating reusable software components and reducing task interdependencies. By reducing interdependence, the produced modules are more likely to become useful in future projects on their own; furthermore, this reduction of intertwined computer code also has a positive effect on project teams. Teams in companies that globally distribute their development benefit from increased autonomy and reduced communication requirements. The authors point out, however, that the prerequisites for distributing software development are not only good project planning but also the standardization of tools and development procedures. Without such prearrangements it may become almost impossible to manage and consolidate the various distributed team activities (2007). Especially for teams whose members work in countries far away from one another, it may pay off to deploy video or other Internet-based conferencing technologies and realize the substantial savings they offer. But are these means of communication effective?

In the last decade a new form of organization has emerged that takes the fullest advantage of the Internet. Virtual organizations exist entirely in cyberspace, and their team members communicate mostly, if not exclusively, via the Internet using webcams and messaging software. The challenge for managers in virtual organizations is to exploit the new technology but also to find ways to motivate and direct the workforce and work processes. A study by Andres (2002) compared virtual software development teams with face-to-face teams and identified several challenges and opportunities for virtual managers. Managing work from a different time zone can be problematic due to the lack of physical presence. Communication needs to be asynchronous or can only occur at work hours that overlap in both time zones. Virtual teams facilitate this process by using email and voice/text messaging, but more importantly by reducing the interdependency of tasks. Andres (2002) suggested that these types of communication have lower “social presence”: humans have a need and an ability to feel the presence of others in the group, and these channels convey little of it. The problem with many computerized communication channels is that visual cues, utterances, body-language cues, and cues from the person’s voice are missing. When placed on a social presence continuum, the various communication types rank as follows, from lowest to highest: email, phone, video conferencing, and face-to-face meetings. Andres’ comparison between development teams using video conferencing and teams using face-to-face meetings revealed that the latter group was far more efficient and productive, even though the video-conferencing team benefitted from reduced travel costs and time.

The study conducted in 2002, however, has several shortcomings. First, it is already seven years old, and Internet costs have dropped and speeds have improved significantly since then. Considering the improvements in video quality, availability, and computer speeds, this form of communication has recently become more feasible. In addition, today’s managers are only now starting to learn how to use these means of communication efficiently. For example, even though email technology has been around for two decades now, many managers still find that emails can create a lot of ambiguity. The challenge for future generations of managers will be to adapt their writing style to the limitations of email and other text messaging technologies. Another important factor to consider is that written communication may be stored indefinitely and have legal consequences; hence, more often than not, managers may intentionally prefer to avoid such communication channels for political or legal reasons. The study by Andres (2002), however, resulted in a negative view of video conferencing, probably because the technology had not yet matured and the team members were not yet comfortable with it.

For video conferencing to work well, all participants need to be knowledgeable of the peculiar characteristics of that technology and adjust their communication style and speech accordingly. Regardless of meeting type, another important factor is preparation. What could be researched in conjunction with Andres’ study in the future is the degree of preparation of the group. Do team members invest enough time in preparing questions and answers for their teammates before coming to the meeting? Video conferences may require more preparation than face-to-face meetings in some circumstances.

Another opportunity for software businesses, and a challenge for managers worldwide, is outsourcing. In the year 2007, $70 billion was spent globally on outsourced software development (Scott, 2007). Given the extreme shortage of IT skills in the U.S. and Europe, many companies take advantage of globalization by choosing international suppliers for their software development tasks. Outsourcing, however, requires elaborate coordination between the organization and its many supplier groups. The idea is that, in total, coordination costs and problems are less costly than in-house development; however, this goal is not always achieved. While outsourcing, when it is deployed and coordinated correctly, can result in 24-hour development worldwide and thereby provide continuous services to the organization around the clock, it may also result in the loss of intellectual property. While mechanical parts are patentable in most countries that support intellectual property rights, software is not patentable in most countries outside North America.

In addition to the challenge of managing outsourcing, software organizations exploit technologies in various ways to save costs, for example by offering remote access, telecommuting, and service-oriented architectures (SOA) (Scott, 2007). Remote access and telecommuting increased six-fold between 1997 and 2005 and resulted in $300 million in annual savings due to a reduction of office space (2007). SOA is a related concept that effectively involves renting software to customers. Instead of buying, installing, and maintaining software and servers, customers can rent a service online and reduce the total cost of ownership, because these activities are no longer required on the customer side. Gradually, the virtualization of the software business opens new horizons and provides further opportunities, but it also presents managers with endless challenges.

Some of the strengths and weaknesses of offshore and virtual team development were studied by Slavova (2000). In the year 2000, India and Ireland were the largest offshore software development locations. Offshore companies can offer up to 60% cost reduction, faster completion of development tasks by distributing them around the globe, and specific domain knowledge acquired over years of providing similar services to other customers. The integration of work from external sources, however, constitutes a major hurdle. Furthermore, language and cultural issues can cause serious communication problems that put the project at risk, especially when misunderstandings cause misinterpretations of project specification documents. Slavova (2000) found that the most common remedy and strategy for avoiding problems with offshore suppliers is to visit them frequently face-to-face; however, this tactic results in higher travel costs and disruptions of the managers’ workflows and hence may offset the benefits gained from outsourcing altogether. Managers in the software business therefore need to balance the risks and potential benefits before engaging in outsourcing, because for many companies this strategy failed to pay off in the end.

A huge opportunity that emerged in the last decade is online innovation. The collective innovation effort of many individuals and companies on the Internet is generally known as open-source, and it has led to many advances in computer technology, such as the free Linux operating system. At first, businesses felt threatened by this wave of developments on the market because they perceived open-source solutions as competing with their products. In many cases this was, and still is, true; however, several companies, including IBM, are exploiting this new way of innovation for their own and for a common benefit (Vujovic & Ulhøi, 2008). Because software companies operate in an increasingly unstable environment, they struggle to continuously create new and better products. By exposing the computer code to the public on the Internet, companies can benefit from ideas submitted by the public, especially other companies. Furthermore, companies benefit from free bug finding and testing by external users, but one of the primary reasons for “going open-source” is the quick adoption and spread of the company’s technology at relatively little or no cost. The spread of IBM’s open-source technology, for example, is also free marketing for the company. But how can companies make money by offering something for free?

The closed innovation model (the traditional model of providing software without revealing the software code) can be combined with open-source, so the company can charge for the product. In other cases, the company can reveal the technological platform on the Internet for free and then sell specialized tools which utilize the new platform. The big money savers are obviously the shared development, testing, and maintenance costs since many interested parties work on the same project.

The knowledge-sharing model of open-source is nothing new, however. The philosophy and the benefits of open innovation models were already realized in the third quarter of the nineteenth century. Back then, open innovation was practiced in the UK iron and US steel industries. The cooperation of many industry players ended the domination of proprietary technologies for which costly royalties were due (Vujovic & Ulhøi, 2008). Given the dynamic environment of the IT industry and the short lifespan of computer technologies, the adoption of open innovation models has gained much popularity. By analyzing the largest open-source players in the market, Vujovic and Ulhøi put together a list of supportive strategies, which is shown in Table 2. Several of these strategies are quite relevant from a top management perspective as well, such as deploying open-source to block a competitor and using the open model as a gateway to greater market share.

Table 2

Strategies for adopting the open-source approach (Vujovic & Ulhøi, 2008).

Business Strategy

Obtaining higher market share

Obtaining market power

Better adoption of a product and thereby establishing standards

Shifting competitive advantage to another architectural layer

Making the product more ubiquitous

Delivering faster time-to-market

Spurring innovation

Complementing a revenue core stream

Blocking a competitor


Reviewing the rather recent emergence of the IT industry, and of the software industry in particular, several parallels can be drawn to management history. While Taylor’s scientific management was a highlight in the evolution of management science (Wren, 2005), the software industry seems to lag behind such great advancements. Due to its high level of complexity, the software development discipline is still plagued with quality problems stemming from a lack of standardization. Similar to Taylor’s efforts, managers need to analyze software development processes and develop industry-wide standards and measures. Once such measures and procedures exist, software projects will become much more predictable.

Much of today’s software industry practice would be déjà vu for Taylor, were he still alive. In addition, the anomie and social disorganization concerns of the social person era apply today more dramatically than in the past. Mayo described in the 1940s how managers overemphasized technical problems in the hope of raising efficiency while ignoring the human social element (p. 296). The same situation is now evident to a larger degree in the computer industry. The rapid technological advances have created many opportunities and changed the work environment drastically. At the same time, however, management was unable to prepare for the dramatic shifts technology would bring to the workplace. At best, managers are simply reacting to technological advances, because the consequences are mostly unpredictable given the complexity of human nature. For example, email brought several benefits, such as low-cost and simple asynchronous communication; however, many email messages are misunderstood because they are not written appropriately. Moreover, IT knowledge workers struggle to keep up with the vast number of messages received per day, as these constitute a severe disruption of the daily workflow.

As knowledge workers become more and more essential to an organization’s survival, and as organizations in this industry mature and require greater headcounts, the span of control is becoming an issue for managers to handle correctly. As discussed in Wren (2005), as team size increases, the number of interrelations to be managed rises astronomically (p. 353). Managing larger teams poses a great problem because the sheer number of interrelations also makes it more difficult to develop trust within the team. Motivating large groups of knowledge workers can hence be tricky, especially because creative tasks can require a large degree of collaboration. Work design is therefore a major hurdle for future managers to overcome. Much emphasis has been placed on hygiene factors rather than on motivators of the workforce. Flexible hours, telecommuting, empowerment, and increased responsibility may help in the short term, but for the long term management will need to find new strategies for retaining knowledge workers.

Product quality remains a big issue. Deming’s ideas are sound, but quality assurance in the software world is difficult to implement due to the lack of standards and measures. The open-source innovation model may provide some relief in this respect, because the greater involvement of external developers can help improve overall quality. On the other hand, however, open-source projects are hard to manage for the same reason. Since open-source projects are self-directed and not owned by anyone in particular, they sometimes suffer from uncontrolled, tumor-like growth.

Several of Deming’s deadly sins (Wren, 2005, p. 463) apply directly to the software industry. Most products are made from scratch rather than from components, and there is little standardization in software organizations. Since software developers tend to see their job as a craft, they defy standards and procedures. In addition, the rather complex environment, with its dynamic requirements and the push to meet deadlines, makes it easy for practitioners to lose sight of quality improvements through the preparation of organizational standards. High turnover and individual performance measures continue to be industry practice, even though many scientists, such as Deming, have long argued that such measures are counterproductive.

Future managers need to find ways to compensate for high turnover if they cannot find a way to avoid it. The division of labor might work well for the company, but it is not well received by a workforce that tends to require constant challenge. Top performers disfavor mundane tasks and would rather walk away with all their knowledge. IBM has successfully deployed job enlargement for some time to combat this phenomenon (Wren, 2005, p. 332). Unfortunately, this strategy might not work for every company, and it can only be used within certain boundaries of the organization. Given the developments of the last two decades, managers will need to confront the discipline of knowledge worker management and find a workable solution for their organization.

The integration of management science with advances in psychology and sociology may provide a route toward solving the knowledge worker management problem. It is crucial for managers to have an accurate understanding of the motivational drives of this particular group of the workforce. These employees enjoy higher incomes, greater flexibility and freedom, and greater bargaining power. This puts them in a gray zone between the traditional, lower-skilled employee and an owner of the company, because knowledge workers create intellectual capital. Because most of this capital is lost to the organization when they decide to leave, turnover can be much more damaging than with traditional workers. Managers therefore cannot simply apply conventional strategies to this dissimilar group of employees; rather, they need to seek more creative incentives for motivating and retaining knowledge workers.