Wednesday, 26 October 2011

Define cloud computing. List its advantages and disadvantages, and give a few examples of cloud computing on PCs, tablets and phones. Which major operating system relies heavily on cloud computing?

Cloud computing enables users to store files and run software remotely, rather than on a hard drive or server in their office. Many people are already using cloud computing without realizing it, whether for work or personal use. Examples include web-based email like Gmail and Hotmail, communication tools like Skype, video sites like YouTube and Vimeo, and music-sharing sites such as SoundCloud.
Other cloud computing applications include software as a service (SaaS), customer relationship management (CRM), file storage, file synchronization and file backup. It is now possible for businesses to run their own private cloud, which incorporates specific services and is accessible only to specific people.

Advantages of Cloud Computing
  1. Saves time. Businesses that rely on conventional software for their management needs lose time getting new programs installed and running at functional levels. By turning to cloud computing, you avoid these hassles: you simply need a computer with Internet access to reach the information you need.
  2. Fewer glitches. Applications delivered through cloud computing require fewer versions; upgrades are needed less frequently and are typically managed by the data center. Businesses often run into problems because their software packages were not designed to work with one another, so departments cannot share data across different applications. Cloud computing lets users integrate various types of applications, including management systems, word processors and e-mail. The fewer the glitches, the more productive employees can be.
  3. Going green. On average, individual personal computers are used at only 10 to 20 percent of their capacity, and they sit idle for hours at a time soaking up energy. Pooling resources into a cloud consolidates energy use. Essentially, you save on costs by paying only for what you use and by extending the life of your PC.
  4. Fancy technology. Cloud computing offers customers access to computing power that is not ordinarily available through a standard PC. Applications can draw on this virtual capacity; users can even build virtual assistants, which automate tasks such as ordering, managing dates, and offering reminders for upcoming meetings.
  5. Mobilization. The services you need are available from just about anywhere in the world: sales can be conducted over the phone and leads tracked from a cell phone. Cloud computing opens up a whole world of wireless devices, all of which can access your applications. Companies are taking sales productivity to a new level while providing their sales representatives with high-quality, professional devices that motivate them to do their jobs well.
  6. Consumer trends. The most successful business practices are the ones that reflect consumer trends. Currently, over 69 percent of Americans with Internet access use some form of cloud computing, whether Web e-mail, data storage or software, and this number continues to grow. Consumers are looking to do business with companies that take a modern approach.
  7. Social media. Social networking is the wave of the future among entrepreneurs. Companies use sites such as Twitter, Facebook, and LinkedIn to heighten their productivity: blogs communicate with customers about improvements companies need to make, LinkedIn is popular among business professionals for collaboration, and target groups can be tracked more efficiently by tagging networks on Facebook. New advertising techniques are also emerging on these sites, and businesses are seeing the advantages of adapting to a more modern world.
  8. Customize. All too often, companies purchase the latest software hoping it will improve their sales, only to find that off-the-shelf programs do not quite meet their needs. Some businesses require a personalized touch that ordinary software cannot provide. Cloud computing gives the user the opportunity to build custom applications on a user-friendly interface. In a competitive world your business needs to stand out from the rest, and customization is the solution.
  9. No need for hardware hiccups. Because applications and data live on the provider's servers, you avoid buying, maintaining and troubleshooting server hardware of your own.
  10. IT staff cuts. When all the services you need are maintained by experts outside your business, there is no need to hire more of your own.
Disadvantages Of Cloud Computing
While cloud computing and storage are a great innovation in the field of computing, there are certain things you need to be cautious about too. Some may say there are no downsides to cloud computing, but users should not depend too heavily on these services. Even if a particular service offers everything you need, you have to consider the security and portability it provides, and make contingency plans in case the service is terminated abruptly.
Moreover, an online service is more exposed to threats than your own PC. Having said that, most would agree that with cloud computing the good outweighs the bad.
The main disadvantages are security and privacy, dependency (loss of control), cost, decreased flexibility, and knowledge and integration.

1. Security & Privacy

The biggest concerns about cloud computing are security and privacy. Users might not be comfortable handing over their data to a third party. This is an even greater concern for companies that wish to keep sensitive information on cloud servers. While most service vendors ensure that their servers are kept free of viruses and malware, it is still a concern given that users from around the world are accessing the server. Privacy is another issue with cloud servers. Ensuring that a client’s data is not accessed by unauthorized users is of great importance for any cloud service. To make their servers more secure, cloud service vendors have developed password-protected accounts, security servers through which all transferred data must pass, and data encryption techniques. After all, the success of a cloud service depends on its reputation, and any sign of a security breach would result in a loss of clients and business.

2. Dependency (loss of control)

  • Quality problems with the CSP (cloud service provider): no influence on maintenance levels and fix frequency when using cloud services from a CSP.
  • Little or no insight into the CSP's contingency procedures, especially backup, restore and disaster recovery.
  • No easy migration to another CSP.
  • Measurement of resource usage and end-user activities lies in the hands of the CSP.
  • Tied to the financial health of another company.

3. Cost

Higher costs. While in the long run cloud hosting is a lot cheaper than traditional technologies, the fact that it is currently new and still being researched and improved actually makes it more expensive. Data centers have to buy or develop the software that runs the cloud, rewire their machines and fix unforeseen problems (which always appear). This makes initial cloud offerings more expensive. As in other industries, the first customers pay a higher price and deal with more issues than those who switch later, although it would be very hard to create and improve new technologies without these early adopters.

4. Decreased flexibility

This is only a temporary problem (as are the others on this list), but current technologies are still in the testing stages, so they do not yet offer all the flexibility they promise. That will change in the future, but some current users may have to deal with the fact that their cloud server is difficult or impossible to upgrade without losing some data, for example.

5. Knowledge and Integration

Knowledge:
More and deeper knowledge is required for implementing and managing SLA contracts with CSPs. Since all knowledge about the workings of the cloud (e.g. hardware, software, virtualization, deployment) is concentrated at the CSP, it is hard to get a grip on the CSP.
Integration:
Integration with equipment hosted in other data centers is difficult to achieve, and so is peripheral integration: (bulk) printers and local security IT equipment (e.g. access systems) are hard to integrate, as are (personal) USB devices, smartphones, and groupware and email systems.

 Mobile Cloud Computing Changes App Development
With the advent of mobile cloud computing, increasing effort has been put into developing platforms that simplify the development of cloud-based mobile applications. Creating apps for the mobile cloud is significantly different than developing apps for a native smartphone platform like the iPhone or Android. But over the long run, the mobile cloud computing model may prove more profitable for app developers, and open the field to a larger number of developers.
Current mobile development platforms
With the current native platforms, developers need to be knowledgeable about the platform-centric APIs and development tools provided by the platform vendors, in this case Apple and Google. Objective-C is the main development language for writing iPhone apps offered in Apple's App Store. This may not be difficult for experienced C programmers, but Objective-C requires significant programming capability and may pose a steep learning curve for newcomers. Android developers use Java, C or C++ as the main programming languages for app development.
Both the iPhone and Android platforms provide development tools that make the app development process easier and more coherent. Still, these are high-level programming languages that relatively few developers master, compared to the vast number of skilled Web developers who focus on web-centric technologies and standards like HTML, CSS, JavaScript and XML.

Web based mobile application development
Several industry analysts predict that mobile applications will gradually move to the cloud, and move away from being installed and run directly from the handsets themselves. Instead, apps will be accessed and executed directly from the cloud through a mobile web browser interface. Several technologies facilitating this change are already available. HTML5, for example, is necessary for enabling caching on the handset, so that users will experience uninterrupted service levels despite fluctuations in network service delivery. 4G mobile networks, like LTE and WiMAX, are fundamental for supporting large-scale mobile cloud deployment. These networks are already being deployed in several cities and small regions and are expected to obtain significant adoption rates in the coming years.
Enabling mobile technologies
A few mobile solutions providers, such as appMobi, have started to offer integrated mobile browsers that allow users to access apps directly from the websites of their publishers, thereby eliminating the need to go to Apple App Store or Android Market. This also means that app developers and publishers don’t need to go through complicated, and sometimes costly, submission processes, unexpected rejection of their submissions and the required profit sharing with the third-party app stores.
From the perspective of developing mobile apps, using standard web languages and standards like HTML/HTML5, CSS and JavaScript enables cross-platform functionality and removes the limitations of native app development. A much larger segment of developers can start creating mobile apps using the same tools they are already accustomed to, such as Dreamweaver, Eclipse or Visual Studio. Another benefit concerns software upgrades: there is no longer any need to upgrade apps on the handsets themselves.
Other interesting mobile app solution providers include FeedHenry and RhoMobile. These offer cloud-based smartphone frameworks that allow developers to create cross-platform mobile apps using traditional web technologies. With no hardware or software to install, they are a very attractive choice for web developers and enterprises that want to start creating and deploying new mobile apps quickly.

Linux: Designed for the Cloud
Linux is the natural technology for enabling cloud computing: it's modular, it's performant, it's power efficient, it scales, it's open source, and it's ubiquitous. And, as the platform upon which the largest cloud infrastructures in the world have been built, Linux - unlike other available operating systems - has little left to prove as a component of cloud infrastructures, be they public or private. "Every time you use Google, you're using a machine running the Linux kernel," as Google's Chris DiBona has said. [1]
Architecture
The Linux kernel supports a degree of componentization that is unmatched amongst general purpose operating systems. Configurable such that it may power everything from a handset to a supercomputer, the Linux kernel is remarkably adaptable to computing environments of all shapes and sizes. "Linux today supports more hardware devices than any other operating system in the history of the world." [2] This is of particular value in highly customized, scale-out cloud platforms, which are required to run on a heterogeneous collection of commodity hardware, networking and storage gear. Beyond the basic compatibility with the mixed nature of the environment, cloud providers will often take advantage of the ability to modify the Linux source code in order to tune and customize the kernel to their specific needs and hardware.
Compatibility
Linux has an extensive application and ISV ecosystem. With thousands and thousands of Linux compatible and certified applications available, users have many options for their specific workload's needs. Customers leveraging Linux for their local and data center needs, then, will be able to extend this advantage to their cloud-based deployments.
For platform providers, Linux is the logical choice. Like the web architectures it spawned from, cloud computing platforms are often composed from many other open source projects, from databases to file systems to application and web servers to language runtimes. By virtue of its quality, ubiquity, and open source nature, Linux is a first choice deployment target for developers of all of the above. As a result, cloud vendors benefit from the wide application catalog available to the Linux platform.
Whatever your role, choosing Linux means guaranteeing your application choice.
Cost-Licensing
There exists in some quarters the misconception that Linux is always free in the financial sense of the word. In reality, the overwhelming majority of enterprise and governmental production deployments are commercially licensed and supported. For cloud platform providers, however, the option to run non-commercial distributions does exist, and may be compelling. Platform providers choose this path because creating a cloud infrastructure composed of thousands or tens of thousands of licensed nodes would be uneconomical under traditional per-server or per-socket models. By leveraging this lower-cost approach, cloud providers are able to pass the savings on to customers.
Cost-Power
Besides its advantages in licensing, Linux is a more cost effective platform for providers to deploy and customers to target. Partially because of its usage in small, power sensitive devices, Linux has been the beneficiary of a great deal of research in lowering total power consumption. Heavy attention has been paid, for example, to making Linux more power efficient relative to competitors, via projects like the tickless kernel. Combined with the power saving efforts within cloud data centers, Linux is helping to lower the total solution cost for cloud customers.
Manageability & Staffing
For enterprises and governments alike, questions of resourcing and personnel are an important factor in technology deployment and purchase. In addition to evaluating the merits of a given product or project, organizations must consider how their existing skillsets map to the technologies in question, and further, the ability to hire those skills from the general market in the future. Fortunately, because managing and developing for Linux are common skills, the ubiquity of Linux within cloud platforms means that customers deploying to the cloud can avoid costly re-training for system administrators and developers. In addition to re-purposing existing personnel, deployed IT management systems that already target Linux can be better leveraged with regard to Linux-based cloud nodes.
Standards
One of the most common concerns that analysts and other advisers have for potential cloud customers is the lack of standards, and the resulting potential for lock-in. For all of the advantages in deployment speed and flexibility, the nascent stage of many cloud offerings and the absence of common, agreed upon formats for packaging, runtimes, and virtual images introduces risk. Fortunately, customers can leverage Linux as a hedge against this possibility. The differences between Linux instances, hosted in cloud environments and those hosted locally or at a data center, after all, are generally less technical than geographical. By standardizing on Linux workloads, customers will have the flexibility to deploy locally or remotely as the economics and circumstances dictate.
Virtualization
Virtualization, a mainstream technology in most data centers and enterprises, is an important enabler of most cloud platforms. In simple terms, virtualization involves the ability to abstract operating system or application instances from the underlying platform. Windows images or applications, for instance, may be hosted and run on top of a Linux platform using these technologies.
Available to Linux users are a diverse array of virtualization technologies, from the hypervisors that make virtualization possible to the management tooling that allows the virtualized resources to be efficiently marshaled and applied. Equally capable of serving as a host for virtualized instances or as a guest itself, Linux is a stable, secure virtualization option.
Coupled with so-called "live migration" functionality, virtualization can and will also be an important bridge from local environments to cloud-based hardware. Linux is therefore an optimal cloud platform, as it is equally adept at playing the role of the host operating system, via technologies like KVM or Xen, or that of the guest.

Linux is the Cloud's Past
The dot-com era, besides being famous for its irrational exuberance, witnessed the first tentative steps toward cloud computing. Driven by the need to contain costs, many web startups eschewed the traditional scale-up architecture of fewer, more powerful servers and mainframes in favor of massively scaled-out architectures composed of commodity hardware running the Linux kernel. Their success popularized the scale-out architectural style more broadly.
In a very real sense, Linux was the catalyst for a new architectural approach and a new generation of web oriented businesses. It is difficult, in fact, to imagine the cloud arriving without a readily available open source kernel like Linux. Few of the original online startups could have risen to their current prominence without it, as both the technology and the economics of alternatives such as Windows would be prohibitive. Linux enabled the generation of an entirely new class of businesses, serving as the technical foundation for successful startups and those that have followed in their footsteps.
Linux is the Cloud's Present
The dominance of Linux within the current crop of cloud computing vendors is eye opening. Virtually every cloud player of any significance features Linux in either primary or supporting capacities, and this adoption is accelerating. Google's recently launched App Engine and Amazon's competitive EC2 product both leverage the Linux kernel, as do cloud offerings from vendors such as 10gen, 3Tera, Media Temple, Mosso, and Zimory. Different providers choose to take different approaches to their products, with some choosing to expose the underlying operating environment (e.g. EC2) and some abstracting it (e.g., App Engine).
One such example of explicit cloud offerings includes commercial Linux distributor Red Hat's partnership with Amazon to offer Red Hat Enterprise Linux, JBoss Enterprise Application Platform, and Red Hat Enterprise MRG Grid & Amazon EC2 Execute Node on EC2. Other Linux distributions that work on the EC2 platform include Oracle Enterprise Linux, SUSE Enterprise Server, and--most recently--Ubuntu Server 9.10.
Linux is still present as the underlying platform in IBM's partnership with Amazon, under which IBM's DB2 Express-C 9.5, Informix Dynamic Server Developer Edition 11.5, WebSphere Portal Server, Lotus Web Content Management Standard Edition, and WebSphere sMash are the products customers can use in the EC2 cloud.
But whether it's an implicit or explicit role, cautious or cutting-edge, Linux is playing a major part in the overwhelming majority of cloud environments.
The fact is that Linux is already the de facto operating system of choice for cloud computing.
Linux is the Cloud's Future
Linux was the core component powering the first generation of web businesses. These businesses could not have been built without a low-cost, flexible software solution as the foundation. Windows, meanwhile, was not a major player for reasons ranging from licensing costs to technology limitations.
The second wave of online business, typified by Google and Amazon, will move farther into consumers' digital lives, running in multiple devices, handling off-line interruptions, improving the browser interface, facilitating mashups between diverse user-chosen services, and a myriad of other issues that are just starting to be glimpsed. This flexibility and utility, based on Linux, is now pushing the cloud into enterprises, governments, and small businesses the world over.
Having proven its worth in high scale, high demand environments, Linux is today being chosen time and again by cloud providers and their customers. Where the likes of Amazon and Google once benefited from the economics that open source and Linux afforded them, their cloud customers will now benefit from the economies of scale that the large providers can leverage. Economies of scale enabled both by Linux and the cloud.
Linux is nothing less than the foundation upon which cloud platforms will be built going forward.

What is pipelining? Can it be used with parallel processing and multi-core processing?


(n.) (1) A technique used in advanced microprocessors in which the microprocessor begins executing a second instruction before the first has been completed. That is, several instructions are in the pipeline simultaneously, each at a different processing stage.
The pipeline is divided into segments and each segment can execute its operation concurrently with the other segments. When a segment completes an operation, it passes the result to the next segment in the pipeline and fetches the next operation from the preceding segment. The final results of each instruction emerge at the end of the pipeline in rapid succession.
Although formerly a feature only of high-performance and RISC-based microprocessors, pipelining is now common in microprocessors used in personal computers. Intel's Pentium chip, for example, uses pipelining to execute as many as six instructions simultaneously.
Pipelining is also called pipeline processing.
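As a back-of-the-envelope check on why overlapping instructions boosts throughput, consider an idealized k-stage pipeline running n instructions: without overlap, every instruction occupies the processor for k cycles, while a full pipeline emits one completed instruction per cycle once the first has passed through. A small Python sketch of this simplified model (it deliberately ignores stalls, hazards and branches):

```python
# Back-of-the-envelope model of pipeline speedup.
# Non-pipelined: each of the n instructions takes k cycles in turn.
# Pipelined: the first instruction needs k cycles to emerge, then one
# instruction completes every cycle after that.

def cycles_serial(k, n):
    return k * n

def cycles_pipelined(k, n):
    return k + (n - 1)

k, n = 5, 100                     # a 5-stage pipeline, 100 instructions
serial = cycles_serial(k, n)      # 500 cycles
piped = cycles_pipelined(k, n)    # 104 cycles
print(serial, piped, round(serial / piped, 2))
```

For k = 5 and n = 100 the model gives 500 versus 104 cycles, a speedup of about 4.8x; as n grows, the speedup approaches the number of stages, k.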
(2) A similar technique used in DRAM, in which the memory loads the requested memory contents into a small cache composed of SRAM and then immediately begins fetching the next memory contents. This creates a two-stage pipeline, where data is read from or written to SRAM in one stage, and data is read from or written to memory in the other stage.
DRAM pipelining is usually combined with another performance technique called burst mode. The two techniques together are called a pipeline burst cache.
In today’s world of multicore processors and multithreaded applications, programmers need to think constantly about how to best harness the power of cutting-edge CPUs when developing their applications. Although structuring parallel code in traditional text-based languages can be difficult both to program and visualize, graphical development environments such as National Instruments LabVIEW are increasingly allowing engineers and scientists to cut their development times and quickly implement their ideas.
Because NI LabVIEW is inherently parallel (based on dataflow), programming multithreaded applications is typically a very simple task. Independent tasks on the block diagram automatically execute in parallel with no extra work required from the programmer. But what about pieces of code that are not independent? When implementing inherently serial applications, what can be done to harness the power of multicore CPUs?

Introduction to Pipelining

One widely accepted technique for improving the performance of serial software tasks is pipelining. Simply put, pipelining is the process of dividing a serial task into concrete stages that can be executed in assembly-line fashion.
Consider the following example: suppose you are manufacturing cars on an automated assembly line. Your end task is building a complete car, but you can separate this into three concrete stages: building the frame, putting the parts inside (such as the engine), and painting the car when finished.
Assume that building the frame, installing the parts, and painting take one hour each. Therefore, if you built just one car at a time, each car would take three hours to complete (see Figure 1 below).
Figure 1. In this example (non-pipelined), building a car takes 3 hours to complete.
How can this process be improved? What if we set up one station for frame building, another for part installation, and a third for painting? Now, while one car is being painted, a second car can have parts installed, and a third car can be under frame construction.

How Pipelining Improves Performance

Although each car still takes three hours to finish using our new process, we can now produce one car each hour rather than one every three hours – a 3x improvement in throughput of the car manufacturing process. Note that this example has been simplified for demonstration purposes; see the Important Concerns section below for additional details on pipelining.
Figure 2. Pipelining can greatly increase the throughput of your application.
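The assembly-line idea translates directly into code. Below is a minimal, hypothetical Python sketch (the article's own examples use LabVIEW): three stages run in their own threads and hand cars to each other through queues, so all three stations work concurrently:

```python
import queue
import threading

# A minimal three-stage pipeline mirroring the car example: build
# frame -> install parts -> paint. Each stage runs in its own thread
# and hands work to the next through a queue, so all three stations
# operate concurrently, assembly-line style.

def stage(name, inbox, outbox):
    while True:
        car = inbox.get()
        if car is None:        # sentinel: pass the shutdown downstream
            outbox.put(None)
            break
        car.append(name)       # stand-in for an hour of real work
        outbox.put(car)

q_frame, q_parts, q_paint, finished = (queue.Queue() for _ in range(4))
threads = [threading.Thread(target=stage, args=a) for a in
           [("frame", q_frame, q_parts),
            ("parts", q_parts, q_paint),
            ("paint", q_paint, finished)]]
for t in threads:
    t.start()

for i in range(3):             # feed three cars into the line
    q_frame.put([f"car{i}"])
q_frame.put(None)              # no more work

cars = []
while (car := finished.get()) is not None:
    cars.append(car)
for t in threads:
    t.join()
print(cars)                    # every car passed through all three stages
```

With only three cars the concurrency gain is modest, but the structure is the point: each queue plays the role a shift register or feedback node plays in the LabVIEW version.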

Basic Pipelining in LabVIEW

The same pipelining concept as visualized in the car example can be applied to any LabVIEW application in which you are executing a serial task. Essentially, you can use LabVIEW shift registers and feedback nodes to make an “assembly line” out of any given program. The following conceptual illustration shows how a sample pipelined application might run on several CPU cores:
Figure 3. Timing diagram for a pipelined application running on several CPU cores.

Important Concerns

When creating real-world multicore applications using pipelining, a programmer must take several important concerns into account. In particular, balancing the pipeline stages and minimizing memory transfer between cores are critical to realizing performance gains from pipelining.

Balancing Stages

In both the car manufacturing and LabVIEW examples above, each pipeline stage was assumed to take an equal amount of time to execute; we can say that these example pipeline stages were balanced. However, in real-world applications this is rarely the case. Consider the diagram below: if Stage 1 takes three times as long to execute as Stage 2, then pipelining the two stages produces only a minimal performance increase.
Non-Pipelined (total time = 4s):

Pipelined (total time = 3s):
Note: Performance increase = 1.33X (not an ideal case for pipelining)
To remedy this situation, the programmer must move tasks from Stage 1 to Stage 2 until both stages take approximately equal times to execute. With a large number of pipeline stages, this can be a difficult task.
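The limit an unbalanced pipeline places on performance can be checked with quick arithmetic: in steady state the pipeline completes one item per max(stage_times), while a serial version needs sum(stage_times) per item. A small sketch using the stage times from the example above (3 s and 1 s):

```python
# Best-case (steady-state) pipeline speedup: a serial version spends
# sum(stage_times) per item, while a pipeline completes one item
# every max(stage_times), since the slowest stage sets the pace.

def steady_state_speedup(stage_times):
    return sum(stage_times) / max(stage_times)

print(round(steady_state_speedup([3, 1]), 2))  # unbalanced: 1.33
print(steady_state_speedup([2, 2]))            # balanced: 2.0
```

This reproduces the 1.33X figure above, and shows why shifting work until the stages each take about 2 s doubles the throughput instead.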
In LabVIEW, it is helpful to benchmark each of your pipeline stages to ensure that the pipeline is well balanced. This can most easily be done using a flat sequence structure in conjunction with the Tick Count (ms) function as shown in Figure 4.
Figure 4. Benchmark your pipeline stages to ensure a well balanced pipeline.
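Outside LabVIEW the same idea applies: time each stage independently and compare. A rough Python analogue of the Tick Count approach (the two stage functions here are hypothetical stand-ins for real pipeline work):

```python
import time

# Time each pipeline stage independently to check that the pipeline
# is balanced - a rough analogue of wrapping LabVIEW stages in a flat
# sequence structure with the Tick Count (ms) function.

def benchmark(stage_fn, *args, repeats=50):
    start = time.perf_counter()
    for _ in range(repeats):
        stage_fn(*args)
    return (time.perf_counter() - start) / repeats  # seconds per call

def stage1(data):          # hypothetical heavy stage
    return sorted(data)

def stage2(data):          # hypothetical light stage
    return sum(data)

data = list(range(10_000, 0, -1))
t1 = benchmark(stage1, data)
t2 = benchmark(stage2, data)
print(f"stage1: {t1:.6f} s/call, stage2: {t2:.6f} s/call, ratio: {t1 / t2:.1f}")
```

A large ratio between the two timings is the signal to move work from the slow stage to the fast one.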

Data Transfer Between Cores

It is best to avoid transferring large amounts of data between pipeline stages whenever possible. Since the stages of a given pipeline may be running on separate processor cores, any data transfer between individual stages can result in a memory transfer between physical processor cores. If two processor cores do not share a cache (or the transfer size exceeds the cache size), the end user may see a decrease in pipelining effectiveness.

Conclusion

To summarize, pipelining is a technique that programmers can use to gain a performance increase in inherently serial applications (on multicore machines). The CPU industry trend of increasing cores per chip means that strategies such as pipelining will become essential to application development in the near future.
In order to gain the most performance increase possible from pipelining, individual stages must be carefully balanced so that no single stage takes a much longer time to complete than other stages. In addition, any data transfer between pipeline stages should be minimized to avoid decreased performance due to memory access from multiple cores.

What is threading in multi-tasking? Does it make processing faster or slower?


A multicore system is a single CPU that contains two or more cores, each functioning as an independent microprocessor. A multicore microprocessor performs multiprocessing in a single physical package. Multicore systems share computing resources that are often duplicated in multiprocessor systems, such as the L2 cache and the front-side bus.
Multicore systems provide performance that is similar to multiprocessor systems but often at a significantly lower cost because a motherboard with support for multiple processors, such as multiple processor sockets, is not required.

Multitasking

In computing, multitasking is a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. With a multitasking OS, such as Windows XP, you can simultaneously run multiple applications. Multitasking refers to the ability of the OS to quickly switch between each computing task to give the impression the different applications are executing multiple actions simultaneously.
As CPU clock speeds have increased steadily over time, not only do applications run faster, but OSs can switch between applications more quickly. This provides better overall performance. Many actions can happen at once on a computer, and individual applications can run faster.
In a computer with a single CPU core, only one task runs at any point in time, meaning that the CPU is actively executing instructions for that task alone. Multitasking addresses this limitation by scheduling which task may run at any given time and when a waiting task gets its turn.


When running on a multicore system, multitasking OSs can truly execute multiple tasks concurrently. The multiple computing engines work independently on different tasks.
For example, on a dual-core system, applications such as word processing, e-mail, Web browsing, and antivirus software can be spread across the two processor cores and run at the same time. You can multitask by checking e-mail and typing a letter simultaneously, improving overall performance for the applications.

Figure 2. Dual-core systems enable multitasking operating systems to execute two tasks simultaneously

The OS executes multiple applications more efficiently by splitting the different applications, or processes, between the separate CPU cores. The computer can spread the work - each core is managing and switching through half as many applications as before - and deliver better overall throughput and performance. In effect, the applications are running in parallel.
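A minimal sketch of this idea in Python (the language and the worker names are illustrative assumptions, not from the text): on a multicore system the OS can schedule the two worker processes below on separate cores, so the CPU-bound work genuinely runs in parallel rather than being time-sliced.

```python
# Sketch: two CPU-bound "applications" run as separate processes, which a
# multitasking OS can place on separate cores of a dual-core system.
import multiprocessing as mp

def count_down(label, n):
    # CPU-bound busy work standing in for a real application (e.g. a scan)
    while n > 0:
        n -= 1
    return label

if __name__ == "__main__":
    with mp.Pool(processes=2) as pool:
        results = pool.starmap(count_down, [("scan", 1_000_000),
                                            ("play", 1_000_000)])
    print(results)  # ['scan', 'play']
```

With processes (unlike threads in CPython), each worker has its own interpreter, so both cores can be fully busy at once.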

Multithreading

Multithreading extends the idea of multitasking into applications, so you can subdivide specific operations within a single application into individual threads. Each of the threads can run in parallel. The OS divides processing time not only among different applications, but also among each thread within an application.
In a multithreaded National Instruments LabVIEW program, an example application might be divided into four threads - a user interface thread, a data acquisition thread, a network communication thread, and a logging thread. You can prioritize each of these so that they operate independently. Thus, in multithreaded applications, multiple tasks can progress in parallel with other applications that are running on the system.
Figure 3. Dual-core system enables multithreading
Applications that take advantage of multithreading have numerous benefits, including the following:
  • More efficient CPU use
  • Better system reliability
  • Improved performance on multiprocessor computers
In many applications, you make synchronous calls to resources, such as instruments. These instrument calls often take a long time to complete. In a single-threaded application, a synchronous call effectively blocks, or prevents, any other task within the application from executing until the operation completes. Multithreading prevents this blocking.
While the synchronous call runs on one thread, other parts of the program that do not depend on this call run on different threads. Execution of the application progresses instead of stalling until the synchronous call completes. In this way, a multithreaded application maximizes the efficiency of the CPU because it does not idle if any thread of the application is ready to run.
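The blocking scenario above can be sketched in Python (an assumed language for illustration; the "instrument read" is a simulated delay, not a real instrument call): the slow synchronous call blocks only its own thread, while the main thread keeps doing useful work instead of stalling.

```python
# Sketch: a slow synchronous call runs on a worker thread while the rest of
# the program stays responsive on the main thread.
import threading
import time

def read_instrument(results):
    time.sleep(0.5)            # stands in for a slow synchronous instrument call
    results.append("reading")

results = []
worker = threading.Thread(target=read_instrument, args=(results,))
worker.start()

ticks = 0
while worker.is_alive():       # main thread keeps working instead of blocking
    ticks += 1
    time.sleep(0.05)

worker.join()
print(results, ticks > 0)      # ['reading'] True
```

In a single-threaded version, nothing else could run during the half-second call; here the main loop ticks along in parallel.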




Multithreading with LabVIEW

LabVIEW automatically divides each application into multiple execution threads. The complex tasks of thread management are transparently built into the LabVIEW execution system.

Figure 4. LabVIEW uses multiple execution threads


Compare multi-core processing to parallel processing in terms of speed and the number of tasks you can process at a time.


A multi-core processor is a single computing component with two or more independent actual processors (called "cores"), which are the units that read and execute program instructions.[1] The data in the instruction tells the processor what to do. The instructions are very basic things like reading data from memory or sending data to the user display, but they are processed so rapidly that human perception experiences the results as the smooth operation of a program. Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP), or onto multiple dies in a single chip package.
Processors were originally developed with only one core. A many-core processor is a multi-core processor in which the number of cores is large enough that traditional multi-processor techniques are no longer efficient, largely because of issues with congestion in supplying instructions and data to the many processors. The many-core threshold is roughly in the range of several tens of cores; above this threshold, network-on-chip technology is advantageous. Tilera processors feature a switch in each core to route data through an on-chip mesh network to lessen the data congestion, enabling their core count to scale up to 100 cores.
A dual-core processor has two cores (e.g. AMD Phenom II X2, Intel Core Duo), a quad-core processor contains four cores (e.g. AMD Phenom II X4, the Intel 2010 Core line that includes three levels of quad-core processors; see Intel Core i3, i5, and i7), and a hexa-core processor contains six cores (e.g. AMD Phenom II X6, Intel Core i7 Extreme Edition 980X). A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared memory inter-core communication methods. Common network topologies to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores, while heterogeneous multi-core systems have cores that are not identical. Just as with single-processor systems, cores in multi-core systems may implement architectures such as superscalar, VLIW, vector processing, SIMD, or multithreading.
Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics.
The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can be parallelized to run on multiple cores simultaneously; this effect is described by Amdahl's law. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache(s), avoiding use of much slower main system memory. Most applications, however, are not accelerated so much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem[2]. The parallelization of software is a significant ongoing topic of research.
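Amdahl's law can be made concrete with a short Python sketch (an assumed language for illustration; the fractions and core counts are example values): speedup = 1 / ((1 - p) + p / n), where p is the parallelizable fraction of the work and n the number of cores.

```python
# Amdahl's law: the serial fraction (1 - p) limits overall speedup,
# no matter how many cores the parallel fraction p is spread across.
def amdahl_speedup(p, n):
    """Speedup on n cores when fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelized, 4 cores give well under 4x:
print(round(amdahl_speedup(0.9, 4), 2))   # 3.08
# An embarrassingly parallel problem (p = 1) scales with the core count:
print(round(amdahl_speedup(1.0, 4), 2))   # 4.0
```

This is why the text notes that gains depend heavily on how much of the software can actually be parallelized.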
     


The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to operate at a much higher clock-rate than is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (alternative: Bus snooping) operations. Put simply, this means that signals between different CPUs travel shorter distances, and therefore those signals degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.
The largest boost in performance will likely be noticed in improved response-time while running CPU-intensive processes, like antivirus scans, ripping/burning media (requiring file conversion), or file searching. For example, if the automatic virus-scan runs while a movie is being watched, the application running the movie is far less likely to be starved of processor power, as the antivirus program will be assigned to a different processor core than the one running the movie playback.
Assuming that the die can fit into the package, physically, the multi-core CPU designs require much less printed circuit board (PCB) space than do multi-chip SMP designs. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because of the decreased power required to drive signals external to the chip. Furthermore, the cores share some circuitry, like the L2 cache and the interface to the front side bus (FSB). In terms of competing technologies for the available silicon die area, multi-core design can make use of proven CPU core library designs and produce a product with lower risk of design error than devising a new wider core-design. Also, adding more cache suffers from diminishing returns.
Multi-core chips also allow higher performance at lower energy. This can be a big factor in mobile devices that operate on batteries. Since each core in a multi-core processor is generally more energy-efficient, the chip becomes more efficient than having a single large monolithic core. This allows higher performance with less energy. The challenge of writing parallel code, however, can offset this benefit.[4]

3. Explain how bus speed affects the performance of the computer.


Bus Speed

Busses connect different parts of the motherboard to one another.
A bus is simply a circuit that connects one part of the motherboard to another. The more data a bus can handle at one time, the faster it allows information to travel. The speed of the bus, measured in megahertz (MHz), refers to how many times per second data can move across the bus.
Bus speed usually refers to the speed of the front side bus (FSB), which connects the CPU to the northbridge. FSB speeds can range from 66 MHz to over 800 MHz. Since the CPU reaches the memory controller through the northbridge, FSB speed can dramatically affect a computer's performance.
Here are some of the other busses found on a motherboard:
  • The back side bus connects the CPU with the level 2 (L2) cache, also known as secondary or external cache. The processor determines the speed of the back side bus.
  • The memory bus connects the northbridge to the memory.
  • The IDE or ATA bus connects the southbridge to the disk drives.
  • The AGP bus connects the video card to the memory and the CPU. The speed of the AGP bus is usually 66 MHz.
  • The PCI bus connects PCI slots to the southbridge. On most systems, the speed of the PCI bus is 33 MHz. Also compatible with PCI is PCI Express, which is much faster than PCI but is still compatible with current software and operating systems. PCI Express is likely to replace both PCI and AGP busses.
The faster a computer's bus speed, the faster it will operate -- to a point. A fast bus speed cannot make up for a slow processor or chipset.
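A back-of-the-envelope sketch in Python (an assumed language for illustration; the bus widths and clock figures are example values, and this ignores multiple transfers per clock) shows why a faster bus moves more data: peak bandwidth is roughly bus width in bytes times clock rate.

```python
# Rough peak bandwidth of a bus: width (bytes) x clock (million cycles/s),
# assuming one transfer per clock cycle. Real busses may transfer 2x or 4x
# per cycle, so these are lower-bound illustrations.
def peak_bandwidth_mb_s(width_bits, clock_mhz):
    return (width_bits // 8) * clock_mhz  # result in MB/s

print(peak_bandwidth_mb_s(64, 66))    # 528  -> 64-bit FSB at 66 MHz
print(peak_bandwidth_mb_s(64, 800))   # 6400 -> 64-bit FSB at 800 MHz
```

The same 64-bit bus moves roughly twelve times more data per second at 800 MHz than at 66 MHz, which is why FSB speed so strongly affects how quickly the CPU can reach memory.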
Now let's look at memory and how it affects the motherboard's speed.