Aside from awareness of energy requirements and limitations (does any Java programmer really try to be aware of them?) and the obvious speed and memory limitations (does anyone deliberately write inefficient code just because a PC is more powerful?):

What are the differences between PC and mobile program development?

Also, financially: I have heard and read that programming for mobile is considered to have more financial potential. However, hacking/cracking/copying code and programs is still within reach of many people there. So why is there any more potential in programming for mobile, if that is even true?

Also, since mobile development is mostly done in Java, is there a difference between Java for PC and Java for mobile? Is Java not a portable language?


Java is Java. Mobile, PC, server - it is all pretty much the same code. Android uses its own runtime called Dalvik - you still write Java code, but it is compiled to a different byte-code and executed by a different virtual machine. That said, how you approach application development in each environment is very different. Mobile devices are much more resource-constrained (memory, CPU, storage, etc.). You need to be aware of these constraints and operate accordingly. I am a senior engineer for a major mobile phone manufacturer, and we write Java code for all of the above. Some phones (of the "smart" variety) are pretty well endowed with CPU and RAM, but a lot of what we sell are not (so-called "feature" phones). We write software that has to run on all of our devices, and that is a definite challenge!
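To make the resource-awareness point concrete, here is a minimal sketch (my own illustration, not anything vendor-specific) using standard Java to query the VM's memory ceiling so code can adapt to the device it lands on. The divide-by-8 budget heuristic and the class name are made up for the example:

```java
// Minimal sketch: sizing a cache relative to the JVM's memory ceiling.
// The divide-by-8 heuristic is illustrative, not a standard practice.
public class CacheSizer {
    public static void main(String[] args) {
        // maxMemory() reports the most memory the VM will attempt to use.
        long maxBytes = Runtime.getRuntime().maxMemory();

        // On a feature phone this may be a few MB; on a desktop, gigabytes.
        long cacheBudget = maxBytes / 8;

        System.out.println("Heap ceiling: " + maxBytes + " bytes");
        System.out.println("Cache budget: " + cacheBudget + " bytes");
    }
}
```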

These days, most mobile applications that third parties sell are what we call "webapps", using the WebApp APIs. That insulates them from such issues for the most part. In our case, those applications actually run on our server farms (thousands of high-end Linux servers running all over the world), and only the UI - input and output - runs on the phone. Even the rendering of the display occurs on our servers. This is becoming an increasingly common pattern for mobile phone applications. Tablets are more capable, so this is less of an issue for them. In any case, because the actual application code runs on a server with 64-128GB of RAM, 8-12+ high-speed cores, and multi-gigabit internet connections, the limits of the phone are not really an issue! :-) Not many people program directly for the phone any longer. That is generally left to the phone vendor, who has to write the support code that the user and applications can utilize.
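As a rough, hypothetical sketch of that thin-client pattern (the URL and endpoint are invented for illustration, not any real service): the phone ships the user's input over HTTP and simply displays whatever the server sends back:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Thin-client sketch: the device only sends input to a server and shows
// the response; all of the real application work happens server-side.
public class ThinClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; a real webapp would define its own API.
        URL url = new URL("https://example.com/app/render?input=hello");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // "render" the server's output
            }
        } finally {
            conn.disconnect();
        }
    }
}
```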


Thank you, @rubberman, I did not know that part about outsourcing the computations to the server side.
But regarding your and @steev's explanations about Java and the differences in computational capabilities, that was merely a repetition of my opening post.
Are you just trying to program in Java as efficiently as possible? But who does not?
Or do you test the real-time performance of your Java code in a way that PC programmers do not?

Many of the issues involve garbage collection and its impact upon performance. There are techniques (too involved to get into here) that can be used to improve its efficiency. In fact, there are real-time Java virtual machines that use reference-counting GCs vs. the default mark-and-sweep variety; however, most mobile devices use standard JVMs. As a result, this can become an issue unless you employ such techniques as pre-allocating data buffers, storing them in pools, and using those for application objects as necessary, reducing the amount of GC required at run time. These techniques are generally NOT taught in most Java programming courses. C++ has similar issues, and so some classes (such as string classes) will implement their own allocators, assigning pre-allocated buffers from a pool to class instances as necessary and returning those buffers to the pool when they are no longer needed. FWIW, string classes in both Java and C++ often use reference counting to determine when to delete the underlying string buffer, and copy-on-write (COW) algorithms to determine whether a buffer needs to be physically copied, vs. just the pointer being assigned to a new object.
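To make the pre-allocation and pooling idea concrete, here is a minimal, hypothetical sketch in plain Java (the class name, sizes, and the ArrayDeque choice are mine, not taken from any particular real-time JVM):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Minimal buffer-pool sketch: allocate buffers up front, then reuse them
// instead of creating new objects, which reduces GC pressure at run time.
public class BufferPool {
    private final ArrayDeque<ByteBuffer> pool = new ArrayDeque<>();
    private final int bufferSize;

    public BufferPool(int count, int bufferSize) {
        this.bufferSize = bufferSize;
        for (int i = 0; i < count; i++) {
            pool.push(ByteBuffer.allocate(bufferSize));
        }
    }

    // Hand out a pooled buffer; fall back to a fresh allocation if empty.
    public ByteBuffer acquire() {
        ByteBuffer buf = pool.poll();
        return (buf != null) ? buf : ByteBuffer.allocate(bufferSize);
    }

    // Reset the buffer and return it to the pool for reuse.
    public void release(ByteBuffer buf) {
        buf.clear();
        pool.push(buf);
    }
}
```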

FYI, I own both the first and second editions of the JVM specification books (and have studied both extensively), and have done extensive research into garbage-collection algorithms, having implemented a C++ reference-counting GC that has allowed major manufacturing systems comprising 10M+ lines of C++ code to operate without memory leaks and with NO application-level delete calls whatsoever. One of the major issues was how to deal with recursive (self-referencing) class structures. That was a difficult issue to resolve... :-)


Very good post, rubberman, very important for me as well, and I am glad to have it here, but what is its relevance to the comparison between smartphones/"mobiles" and desktops?
Those smart/improved GCs, etc., can be used to improve stationary PCs just as well; am I wrong about this?
It seems like an important issue, but not one particular to a certain platform/hardware over another.
So I have to wonder again: what/where is the difference between programming for each one of these platforms?

The difference, in the immortal words of The Bard (Shakespeare), is "It depends". I consider phones to be the primary "mobile" platform of the day, though tablets and such are gaining traction in the mobile domain. Most phones and tablets use ARM processors, whereas desktops (including desktop-replacement laptops) use Intel processors. Oracle has just announced that they are developing an ARM-optimized JVM for mobile use. What they do to optimize it is still up for grabs. In any case, if you are developing an application for both mobile and desktop use, you may not do much differently. In fact, in theory you don't need to do anything. Experience has taught me that is not the case, however; although the manual optimizations you make to improve performance and user experience on mobile devices will apply to desktop versions as well.

Sorry, but that is about as specific as I can get without specifics from you on what you are trying to accomplish.

So what did experience teach you about the difference? I am only trying to understand something I have failed to understand so far, for the reasons I already explained in my replies to you.
I would also like to learn how to program for mobiles, of course, as well as other things.
Everything you can post about these subjects interests me.
