Archive for the ‘Software Engineering’ Category

Some people think software bloat is good. Here is a quote citing one of the reasons.

[…] there are lots of great reasons for bloatware. For one, if programmers don’t have to worry about how large their code is, they can ship it sooner. […] If your software vendor stops, before shipping, and spends two months squeezing the code down to make it 50% smaller, the net benefit to you is going to be imperceptible. […] But the loss to you of waiting an extra two months for the new version is perceptible, and the loss to the software company that has to give up two months of sales is even worse.

A lot of software developers are seduced by the old “80/20” rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.

Unfortunately, it’s never the same 20%. Everybody uses a different set of features. […]

When you start marketing your “lite” product, and you tell people, “hey, it’s lite, only 1MB,” they tend to be very happy, then they ask you if it has their crucial feature, and it doesn’t, so they don’t buy your product.

If you think software bloat is good, I’d like to know your other reasons. If there really are lots of great reasons for bloatware, then it should only be a matter of listing them.

Read Full Post »

I think software bloat is not good.

Basically because of this comment I made on this answer to the causes of software bloat:

It is one thing “if programmers don’t have to worry about how large their code is” when writing only the necessary and right code, and a very different thing to have programmers carelessly write and add code that unnecessarily increases the size of a program just for the sake of shipping sooner. But code size is NOT really the problem; the problem is that most if not all bloated programs are inefficient, slow, buggy, and unreliable; they frequently crash, cause a lot of inconvenience and frustration to users, or even cause fatalities. Bloatware is bad. Want to ship sooner? Write lean programs.

Shipping sooner should not be a reason, much less a great reason, for bloatware; if shipping sooner causes bloatware, that doesn’t make bloatware a good thing to have.

I don’t think I’m far off with the reasons I gave, but I would like to know whether more people think software bloat is not good, and their reasons why it is not.

Read Full Post »

Before we look at a multi-threaded program example, it is very important to have a clear understanding of the implications of introducing multi-threading in our programs. We must understand when it makes sense, and when it is appropriate, to have additional threads in our applications. This is critical because multi-threading may lead to genuinely faster [parallel] execution, or we may end up with a program that not only takes more time than it would with a single thread but also hurts overall performance due to the overhead of synchronizing and switching among multiple execution contexts.

One of the simplest things we should be clear about is that multi-threading per se does not make programs run faster. Introducing multiple threads where we shouldn’t can actually have the opposite effect. Not all programs or tasks are suitable for multi-threaded implementations.

It makes sense to have an additional thread when performing tasks such as:

– Lengthy operations in the background.

– CPU intensive or time consuming operations.

– Tasks that would otherwise tie up our application interaction.

Having an additional thread perform those operations in the background frees the main thread and keeps our application responsive to user interaction. A good example of this is Apple’s iTunes software: if you start playing a song, you can keep doing things like downloading a podcast, synchronizing, or visiting the App Store while the song plays.
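The idea can be sketched in a few lines. Here is a minimal illustration, written in Java since the pattern is the same across platforms (in .NET you would use System.Threading.Thread); the class name, messages, and the sleep that stands in for a lengthy download are all made up for the demo:

```java
public class ResponsiveDemo {
    static volatile boolean downloadDone = false;

    public static void main(String[] args) throws InterruptedException {
        // The worker thread performs the "lengthy operation in the background".
        Thread download = new Thread(() -> {
            try {
                Thread.sleep(200); // stand-in for a time-consuming download
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            downloadDone = true;
            System.out.println("download finished");
        });
        download.start();

        // Meanwhile, the main thread is free to keep serving the user.
        System.out.println("main thread still responding to the user");
        download.join(); // wait for the background task before exiting
    }
}
```

Because the download runs in its own thread, the main thread prints its message immediately instead of waiting 200 milliseconds.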

The following list is specific to the Microsoft .NET Framework and is not exhaustive, but it covers the basic concepts we want to be aware of before writing our first multi-threaded programs. Many of these concepts apply in other programming environments as well, so it is a good starting point for understanding multi-threaded programming.

1. A thread is an independent execution context, and it runs concurrently with other threads.

2. A program starts in a single thread, created automatically by the CLR and operating system, the “main” thread.

3. A program becomes multi-threaded by creating additional threads.

4. The CLR assigns each thread its own memory stack so that local variables are kept separate.

5. Threads share data if they reference the same object instance.

6. Static variables offer another way to share data between threads.

7. Static variables are shared between all threads.

8. When reading/writing shared variables, only one thread should execute that code at a time.

9. Use an exclusive lock while reading/writing shared variables.

10. Code that is protected using an exclusive lock is called thread-safe.

11. When two threads simultaneously contend for a lock, one thread waits, or blocks, until the lock becomes available.

12. Temporarily pausing, or blocking, is an essential feature in coordinating, or synchronizing the activities of threads.

13. A thread, while blocked, does not consume CPU time.

14. Multi-threading is managed internally by a thread scheduler.

15. On a single-core computer, a thread scheduler performs time-slicing – rapidly switching execution between each one of the active threads.

16. A time-slice is typically in the tens of milliseconds, much larger than the CPU overhead of switching contexts, which is typically a few microseconds.

17. On a multi-core or multi-processor computer, multithreading is implemented with a mixture of time-slicing and genuine concurrency – where different threads run code simultaneously on different CPUs.

18. A thread has no control over when and where its execution is interrupted.

19. A common application for multi-threading is performing time-consuming tasks in the background.

20. In a non-UI application, such as a Windows Service, multi-threading makes particular sense when a task is potentially time-consuming because it is waiting for a response from another computer (such as an application server, database server, or client).

21. Another use for multi-threading is in methods that perform intensive calculations (CPU intensive).

22. An application can become multi-threaded in two ways: either by explicitly creating and running additional threads, or using a feature of the .NET framework that implicitly creates threads – such as BackgroundWorker, thread pooling, or a threading timer.

23. Having multiple threads does not in itself create complexity; it’s the interaction between the threads that creates complexity.

24. When heavy disk I/O is involved, it can be faster to have just one or two worker threads performing tasks in sequence rather than a multitude of threads each executing a task at the same time.

25. A thread, once ended, cannot be re-started.

26. By default, threads are foreground threads, meaning they keep the application alive for as long as any one of them is running.

27. Background threads don’t keep the application alive on their own, terminating immediately once all foreground threads have ended.

28. Changing a thread from foreground to background doesn’t change its priority or status within the CPU scheduler in any way.

29. Thread priority becomes relevant only when multiple threads are simultaneously active. In C#: public enum ThreadPriority { Lowest, BelowNormal, Normal, AboveNormal, Highest }

30. From .NET 2.0 onwards, an unhandled exception on any thread shuts down the whole application, meaning ignoring the exception is generally not an option. Hence a try/catch block is required in every thread entry method – at least in production applications – in order to avoid unwanted application shutdown in case of an unhandled exception.

Keep this list handy; it will help you remember these points when writing your multi-threaded programs.
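A few of the items above (25 through 27) can be demonstrated directly in code. The sketch below is in Java, where the rules happen to match .NET’s: a thread, once ended, cannot be re-started, and a daemon thread (Java’s equivalent of a .NET background thread) does not keep the application alive. The class and method names are made up for illustration:

```java
public class ThreadFacts {
    // Item 25: returns true if restarting a finished thread throws.
    static boolean restartThrows() throws InterruptedException {
        Thread t = new Thread(() -> {});
        t.start();
        t.join();          // wait until the thread has ended
        try {
            t.start();     // a thread, once ended, cannot be re-started
            return false;
        } catch (IllegalThreadStateException e) {
            return true;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("restart throws: " + restartThrows());

        // Items 26-27: a background (daemon) thread does not keep the
        // application alive; it is terminated when all foreground threads end.
        Thread background = new Thread(() -> {
            while (true) { /* spin forever */ }
        });
        background.setDaemon(true);
        background.start();
        System.out.println("main ending; the background thread dies with it");
    }
}
```

Even though the background thread loops forever, the program exits normally once the main (foreground) thread finishes.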

If we wanted to chew and whistle at the same time, we would need two mouths, not two cores. Can a washer and dryer work in parallel? Yes, they can. On the same load? Obviously not.

For further reading about .NET Framework threads a very good reference is CodeNotes for VB.NET by Gregory Brill, Random House; pp. 74-85. For a more recent presentation about threads in the .NET Framework go to MSDN Managed Threading http://msdn.microsoft.com/en-us/library/3e8s7xdd.aspx and Managed Threading Best Practices http://msdn.microsoft.com/en-us/library/1c9txz50.aspx

Questo que lotro, salud!

Read Full Post »


Characters in a computer system are stored using numeric codes. These characters include any alphanumeric character, punctuation, symbols like the dollar or pound sign, and special non-printing control characters like ENTER.

The ASCII (American Standard Code for Information Interchange) code was created in the early 1960s to represent the characters of the English alphabet, and it is referred to as a character set. Initially defined using 7-bit codes, ASCII represented up to 128 characters, from code 0 up to code 127. ASCII was later extended to use 8-bit codes, capable of representing up to 256 characters, from code 0 up to code 255. The extended characters included letters that are not part of the English alphabet, like ñ.
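We can inspect these numeric codes directly from a program. A small sketch (in Java here, though any language exposes the same values, since the codes belong to the character set, not the language):

```java
public class AsciiDemo {
    public static void main(String[] args) {
        // Every character is stored as a number; 'A' is 65 in ASCII.
        System.out.println((int) 'A');   // 65
        System.out.println((int) 'a');   // 97
        System.out.println((int) '0');   // 48, the digit zero is a character too
        System.out.println((int) '\n');  // 10, a non-printing control character
        // 'ñ' falls outside 7-bit ASCII; extended 8-bit sets place it at 241.
        System.out.println((int) '\u00F1'); // 241
    }
}
```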

Clearly ASCII faced severe limitations when alphabets other than English had to be represented, such as Chinese, Japanese, Arabic, and Russian, whose writing systems have many more characters and are very different from the English alphabet.


A number of standards and character sets have been created to overcome the limitations imposed by single-octet coded sets like ASCII. The Unicode standard, introduced in the early 1990s by the Unicode Consortium, is a character set designed to support not only the languages mentioned above but many other languages of the world. The Unicode standard, developed in parallel with the International Standard ISO/IEC 10646 (also known as the Universal Character Set), identifies each character in the set by an unambiguous name and a non-negative integer called its code point. Instead of mapping characters directly into single octets, Unicode separately defines which characters are available, how each maps to a unique code point, and how those numbers are encoded into octets.

The Unicode standard can support over one million characters, each mapped to a code point between 0 and 1,114,111 (1,114,112 code points in total). This allows computers and electronic communication devices to represent and store many other alphabets, such as Latin, Greek, Hebrew, and Cyrillic, including both ancient and modern scripts. Introduced in 2011, Unicode 6.0 was the most current version of the standard at the time of writing.

One of the advantages of the Unicode standard is that its first 256 code points correspond with those of ISO 8859-1, the most popular 8-bit character encoding for Western European languages. As a result, the first 128 code points are also identical to ASCII.
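Both points above, and the separation between code points and encodings, can be checked in a few lines. A sketch in Java (the Unicode escapes stand for ñ and the Cyrillic letter Д, written as escapes to keep the source file encoding-independent):

```java
import java.nio.charset.StandardCharsets;

public class UnicodeDemo {
    public static void main(String[] args) {
        String a = "A";            // ASCII letter
        String enye = "\u00F1";    // ñ, also in ISO 8859-1
        String de = "\u0414";      // Д, Cyrillic capital De

        // The first 128 code points match ASCII, the first 256 match ISO 8859-1.
        System.out.println(a.codePointAt(0));      // 65
        System.out.println(enye.codePointAt(0));   // 241
        // Cyrillic lives beyond the single-octet range.
        System.out.println(de.codePointAt(0));     // 1044

        // Code point vs. encoding: the same code point can take a different
        // number of octets depending on the encoding chosen.
        System.out.println(enye.getBytes(StandardCharsets.UTF_8).length);      // 2
        System.out.println(enye.getBytes(StandardCharsets.ISO_8859_1).length); // 1
    }
}
```

The last two lines show why “how characters map to numbers” and “how those numbers are stored as octets” are defined separately: ñ is code point 241 in both cases, but UTF-8 stores it in two octets while ISO 8859-1 stores it in one.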

So Where’s the Catch?

Software that runs on computers has to “marry” one character set while also supporting others. You can see this in operating systems, communications software, software tools, and applications.

Computers around the world run software in many different languages, and electronic documents like web pages or PDF files are written in many different languages. If you want to read documents written in a language other than, say, English, German, or French, your computer system must support the corresponding character encodings so that the content is presented correctly.

Добро пожаловать, Мария Шарапова любит теннис

The previous headline says ‘Welcome, Maria Sharapova likes tennis’ in Russian, which uses the Cyrillic alphabet. If the default character set of your computer system or web browser supports the Cyrillic alphabet, the text above should display correctly.

If the default character set of your system or web browser does not include support for the Cyrillic alphabet, you will see garbled text instead; check the text encoding used by your browser or your system (usually under the View menu).

Typically, different computer operating systems use different default character sets, and they usually have different ways of specifying which character set to use. When documents are created on systems using different character sets, a document created on one system may display its text incorrectly on another, or not display it at all. For instance, if you want to read the Adobe Reader (PDF) version of the manual for your Japanese-branded digital camera, your system needs to support Japanese characters for the text to display correctly.

When computer systems use different but compatible character sets, only some characters may display incorrectly. This usually happens when a character exists in one character set but not in the other, or when the character exists in both sets but has a different numeric code or is represented by a different number of octets. This could be the case when visiting a website created in German, where only some characters are not found in the English alphabet.

Fortunately, much software automatically translates characters behind the scenes when different character encodings are used between computer systems; we don’t even notice it happening, and we can happily read those documents.

You must be kidding! How did you type those letters? We don’t have keyboards with the Russian alphabet here in the U.S.! Easy: I just went to Settings, General, Keyboard, International Keyboards on my iPad (get one if you don’t have one) and added the Russian keyboard. Here it is.

Questo, que lotro, sănătate!


ASCII http://en.wikipedia.org/wiki/ASCII
Character Encoding http://en.wikipedia.org/wiki/Character_encoding
Unicode Consortium http://www.unicode.org/
Universal Character Set http://en.wikipedia.org/wiki/Universal_Character_Set
Another early character encoding http://en.wikipedia.org/wiki/EBCDIC


Read Full Post »

When we create our first program we have no idea what multi-threading is, or what a thread is for that matter, and no one bothers to explain it at the time, or so it seems. However, when we create our first program we are already programming with threads; one thread, at least.

A thread, in programming, is not a physical thing inside the computer. It is just a term for a programming concept, or technique.

When we run our very first program, the operating system loads it (accommodates our program in memory), then the computer starts executing its instructions one after another, one at a time; when there are no more instructions to execute, execution finishes and our program terminates. Done deal, the program ended. So what?

Let me see if this explains it…. That was a thread of execution. Some people also call it a context of execution.

Typically we will add a loop to our program so that the computer keeps it loaded and keeps “running” the instructions inside the loop. Whenever the user decides to leave the program, we break out of the loop and our program terminates and exits. That’s it: still only one executing thread, but a continuously running one.

How did I know the program is using one thread? I didn’t. It turns out that when the operating system loads our program it usually creates a process for it, and then creates a thread, or context, to execute whatever we wrote in our program; this thread is usually called the main thread, or the main context of execution. That easy… thread programming! Single-threaded programming, if you wish.

So what?? Well, if we want to do multi-threaded programming, we have to create any additional threads programmatically; the operating system creates only one for us, the main one, and any additional threads have to be created by the programmer. Don’t worry, you don’t have to pay a penny to create additional threads in your program. Additional threads are free… Cool!

Ok, I see it… If I create an additional thread in my program, then I am doing multi-threaded programming. Yep.

And why would I want to create an additional thread in my program?

There are many reasons for multi-threading, but let me mention just one. Often we have to process long-running or time-consuming tasks in our program (for instance, downloading big files), but we don’t want users to wait for that long-running task to finish while being unable to use our program. So we have the computer execute time-consuming tasks in a separate thread of execution; that way users can keep working with the main program while, say, a file is downloading.

If we want to create an additional thread in our program, we have to provide the starting point of execution for that thread, which is usually the name of a function in our program that we want the thread to execute.
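In code, that looks like the sketch below. It is written in Java (in .NET you would pass a delegate to a Thread and call Start(), just as described later in this post); the class name, the work() function, and the flag are made up for illustration:

```java
public class EntryPointDemo {
    static volatile boolean ran = false;

    // The function we want the new thread to execute: its starting point.
    static void work() {
        ran = true;
        System.out.println("hello from the new thread");
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(EntryPointDemo::work); // hand it the entry function
        t.start();                                   // give it the go-ahead
        t.join();                                    // wait until work() returns
    }
}
```

Note that we pass the function itself, not a call to it; the new thread, not the main one, is what will execute work().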

Then again…..

When we create our very first thread in our formerly single-threaded program, the operating system accommodates the additional thread in memory, and once we give it the go-ahead (usually by calling Start()) the computer starts executing the instructions contained in the function we specified, one after another, one at a time; when there are no more instructions to execute in our function, that context of execution terminates. Done deal, the thread is finished and very likely disposed of by the operating system. Thank you for participating!

Yep; if we do not write a loop (which we usually would for most long-running tasks) inside the function to be executed by the additional thread, the computer will run whatever is inside that function to the end, and the thread will end. That’s it; one less thread in our program. This gives us two ways of ending a thread: one is simply not having a loop in the function the thread executes; the other is having a loop in the function and breaking out of it when a certain condition is satisfied, which ends the thread automatically. As far as I know, threading APIs generally provide no safe way to forcibly stop a running thread from the outside; stopping a running thread depends on what we write inside the function the thread executes.
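The second way of ending a thread, breaking out of its loop when a condition is satisfied, is typically done with a shared flag. A minimal sketch in Java (names and the 50-millisecond pause are made up for the demo; the flag is volatile so the worker thread is guaranteed to see the update):

```java
public class StoppableWorker {
    static volatile boolean keepRunning = true; // the exit condition
    static volatile long iterations = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (keepRunning) {  // the thread ends when this loop ends
                iterations++;      // stand-in for real work
            }
        });
        worker.start();
        Thread.sleep(50);          // let the worker run for a moment
        keepRunning = false;       // satisfy the exit condition...
        worker.join();             // ...and the thread ends on its own
        System.out.println("worker stopped after " + iterations + " iterations");
    }
}
```

Nothing outside the worker “kills” it; it simply runs out of instructions once its loop condition becomes false.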

Starting the execution of an additional thread is not the same as calling a function in our program; starting an additional thread starts execution of an additional context “in parallel” (or so it seems) with our main context. The computer continues with the execution of our main thread right after we start the additional thread; at this point the computer is running two threads in our program, two execution contexts: the main context and our new thread context. Wonderful! And the operating system takes care of switching execution back and forth between the main thread and the new thread; we as programmers don’t have to worry about switching between threads once we have started them. Fantastic!

Note that if there’s no loop in our main thread and we created an additional thread that has a loop, our main program will finish executing and, most likely, our extra thread with it; exactly what happens depends on the programming system you are using (in .NET, for instance, a foreground thread keeps the application alive while a background thread does not).

The “complexity” of multi-threading starts with the “execution in parallel” part. For one, you, the programmer, are responsible for avoiding concurrency conflicts when your different threads try to access or change the value of a variable common to all threads (such as a global or static variable); remember, our threads are “executing in parallel,” and we won’t know when and in what order the computer will switch execution contexts (switching execution from one thread to another). Programming languages that support multi-threading provide functions we can use to synchronize our threads, that is, to have our threads access shared variables in a synchronized manner. Your program and your threads should run fine if you keep your threads (contexts of execution) from touching other threads’ belongings, or if you keep your threads in sync when touching those shared variables.
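Here is what that synchronization looks like in practice. The sketch below (Java; in .NET the equivalent would be the lock statement) has two threads each increment a shared static counter 100,000 times. The synchronized block makes the read-modify-write atomic; without it, increments from the two threads would interleave and the final count would usually fall short of 200,000. All names are made up for the demo:

```java
public class SyncDemo {
    static int counter = 0;                 // shared by all threads
    static final Object lock = new Object();

    static int countWithTwoThreads() throws InterruptedException {
        counter = 0;
        Runnable add = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) {       // only one thread at a time in here
                    counter++;              // read-modify-write, now thread-safe
                }
            }
        };
        Thread a = new Thread(add);
        Thread b = new Thread(add);
        a.start(); b.start();
        a.join();  b.join();                // wait for both threads to finish
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWithTwoThreads()); // 200000, every time
    }
}
```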

I think that’s very much all there is to understanding multi-threading (remember, it is “executing in parallel”). If you are creating your first program and you understood this explanation, you should be able to create multi-threaded programs from now on. Good luck.

Questo, que lotro…, salud!

Read Full Post »

It seems my earlier “clarifications” fell really short and did not give a complete picture of the wireless network issues. Here is more…

1) With my earlier post I was not trying to shift the blame away from AT&T for the network issues or dropped calls. My intention was, and is, to lay out the elements involved (mostly technical) in wireless communications so that we may be able to suggest solutions or hold a more informed opinion. But if someone still insists on criticizing, do criticize the right place, with a better and clearer picture of what’s going on.

2) My earlier “the wireless network is a huge huge collection of hardware” fell very short of conveying how complex the wireless network can be, because it is not only about size (and huge size should not justify crappiness); it also involves quality, and usually more than one wireless carrier is involved. See the comment Gregg Thurman posted over at macdailynews.com regarding the New York Times article “AT&T Takes the Blame, Even for the iPhone’s Faults”:

“The big problem with user experience is that they haven’t a clue why they are having a problem. They use ATT but the person they are talking to uses Verizon (or something else). Is the dropped call ATT’s fault, or is the fault at the other end? Without proper recording and analysis technologies, with complete disclosure by all the carriers, NOBODY knows where the problems exist.”

The reality is that AT&T cell phone users connect to and call not only other AT&T users but also Verizon, T-Mobile, or Sprint users. This makes “blame AT&T” all the more difficult to assert (diversifying the blame would be more accurate; an instance would be the iPhone 3G radio issue suggested in the NYT article, which is part of the networking hardware I talked about in my earlier post).

In reality, every cell phone call (signal) usually travels through a large number of different and diverse wireless networks (and at times wired ones as well, if your call goes to or comes from a land line) before it reaches the other end; this is sometimes true even when calling an in-network user.

It also means… let’s say we are in New York calling a friend in San Francisco whose carrier is different from ours, on a nice day for the continental U.S., and the call is dropped (frustrating, it is). Whom, or what, do we blame? Man, it is very difficult to know what happened in between and why. I wouldn’t know and couldn’t explain. But we very easily throw the blame at our carrier because it is the “visible” suspect; who else could we blame? Understandable, but honestly, we have no clue what happened, much less who is to blame.

Technical note: in this long-distance scenario, your call likely travelled wirelessly only within your city or region. After that, we don’t know how it went to SF; most likely over fiber optics (or another wired medium), or it could have travelled mixed, wireless and wired. I, as a user, don’t know how my call was carried, and I don’t care; I just want NO fumble. And the same “mixed travel” pattern very much applies to a wireless call within the same city on the same carrier.

3) Some say images help us understand things. Has anyone seen what the wireless networkS (yes, plural) look like from above? Not me. Not like a spider web; a spider web seems too structured for a wireless network, except perhaps in some cities. Like a fisherman’s net (similar to our interstate highway system)? Maybe in some places in the country. Concentric circles? Not likely, but who knows. The image I lean toward most is a collage of [mostly unstructured] networks sewn together.

4) We have to accept the diversity of our landscape. The wireless network solution for a city like New York is not, and will not be, the same as for places like the Grand Canyon. The wireless network is as diverse as our landscape: building a network in some places will be nearly impossible, some places do not financially justify building one at all, and in some places it is not even necessary. These factors likely play a part in a carrier’s assessment of whether or not to build a wireless network. This is for those talking and complaining about “poor coverage.”

5) What do we want out of the wireless network? If we see the network pipes as threads this is what we want:

  • Larger/longer threads – greater coverage.
  • Better threads – reliability (lesser drops, lesser noise).
  • Thicker threads – capacity (bandwidth), more “water” in less time.

So what do we do with all these clarifications? If you want to blame your carrier less frequently…

  • Try to stick with the carrier that has the fewest networks sewn together (if you can determine that).
  • Try to stick with the carrier having, building, or upgrading better technology for its wireless network. A simple example from the wired world: for my land-line phone service I would subscribe with the company using fiber optics over the one still using copper; the company using fiber will likely have less trouble carrying my call and will connect me faster. I don’t know much about the different wireless technologies, but the same criterion holds: go with the one having better technology for carrying wireless signals (the larger “same-kind” network for coverage, the more resistance to obstacles for strength, the greater capacity for speed). An instance would be choosing 3G over EDGE.
  • Try to go for the carrier with LARGER/GREATER self-owned wireless network (if you can determine that).
  • If you travel abroad frequently, it won’t matter much which home carrier you pick; your call will usually be carried by multiple carriers in different countries to the other end. In this case you may want to go with the lowest cost for the wireless service.
  • If there is only one carrier in your area, and you cannot live without a cell phone and cannot move, then you are in a pickle and, yes, you can keep blaming your carrier.

In summary…, because of the complexity of the wireless network and the inherent weaknesses of wireless signals, it is really hard to know what failed (especially when it is the signal getting lost) and who is to blame. Nevertheless, carriers have to do their best to keep a good signal up and running; the wireless carrier that can achieve that will get the most wireless subscribers.

Even though I’m not a cell phone user, I like phone communications (wired or wireless), and even more when they work. It felt really cool when I had the chance to experience Nextel’s then just-introduced walkie-talkie technology; I was able to connect and talk with a coworker in Monterrey while I was in Mexico City, instantaneously, at the push of a button (no dialing)! If you know the topography that separates those two cities (you can check Google Maps), and you know no satellite was involved, you tend to believe telecommunication is magic.

What phone carriers do for us is already amazing.

Questo, que lotro.., salud!


Read Full Post »

The blog post “Admitting You Have a Problem Is the First Step, AT&T” by John Paczkowski of All Things Digital, regarding an AT&T iPhone application for tracking down network issues, prompted me to elaborate and make some clarifications about wireless telecommunications.

[Poor] wireless coverage, network performance, and dropped calls are all very different stories. Coverage is different from signal drop. If you look closely and attentively at the application interface, ‘dropped call’ and ‘no coverage’ are two different buttons. Quiz question: how do you determine which of the two buttons, ‘dropped call’ or ‘no coverage’, to tap?

John, just want to clarify some fine points regarding “it is somewhat remarkable to learn that AT&T isn’t really sure of where, exactly, the holes in its network are.” I believe you have touched a very sensitive fiber and a non-trivial topic regarding wireless telecommunications.

No, it is not remarkable that AT&T isn’t really sure where they are. It is very difficult to know WHERE exactly those “holes” are IF the networking and communications devices are all working right (no holes), because the issue can be the wireless signal itself being lost. The [telecommunications] network is a huge, huge collection of networking hardware and devices, and no doubt AT&T knows when and where the problem is if it is a device malfunction; but when it is something else (usually the signal itself), they do not look for a hole, they just need to try to re-establish the signal and service. I would say it is remarkable that AT&T is addressing its network issues.

Yes, sometimes “network holes” are networking or communications hardware malfunctions, but that’s not always the case. Wireless signals can be brought down, sometimes very easily, for a number of reasons… earth, wind and fire (kidding).

See, wireless electromagnetic signals (waves) “struggle” when travelling (air travel) through concrete walls (meaning mostly buildings), or even through strong winds. Needless to say, thunderstorms bring wireless communications, and sometimes wired ones, down. Tell me: where in the network is the “hole” when the problem was the signal being brought down?

Did you know that GPS devices generally do not work indoors? Have you noticed that most GPS devices are placed on top of the car dashboard, very close to the windshield? Do you know why? Because a GPS device needs to view, and receive, the wireless signal from the satellites.

Do you remember, some years back, how getting into an elevator would cause your cellular call to drop [dead]?

Believe it or not, in these days of such advanced technology, electromagnetic signals (digital or analog), although stronger, retain pretty much the same weaknesses as before. The digital world and digital signaling (sounds familiar? all things digital) made things a lot easier for carriers to keep the signal up and running regardless of earth, wind, fire, shine, light rain, or [heavy] snow.

Put simply, assuming it is not a device malfunction (SIM card, cell phone radio, antennas, relays, routers, switches, etc.), when your cellular call is dropped the cause could have been the [series of] walls, the wind, the rain, the thunderstorm, the ice storm, or the [heavy] snowfall, very much in that order; and it is hard to know where the hole in the network is because there isn’t one.

Admitting we don’t know some [digital] things is a very good [first, or second, or… you pick] step most of the time.

Questo, que lotro…., salud!

Read Full Post »
