For the longest time, you were forced to spend on the most powerful CPUs to get better performance. Now the GPU (Graphics Processing Unit) found on your graphics card can offload that work.
Four years ago, Nvidia developed a programming environment called CUDA (Compute Unified Device Architecture), through which some program processes can be run on the graphics chip. Only Nvidia chips from the GeForce 8000 series onwards supported this.
AMD instead supports the open standard OpenCL, pioneered by the Khronos Group (which Nvidia also supports), using which you can share your program's workload across OpenCL-compatible processors (CPUs and GPUs).
Even Microsoft approved of this development, equipping the new DirectX 11 instruction set with a new interface (DirectCompute), through which you can run program processes on the GPU.
Consider an operation such as counting the number of times a particular word appears in a book. A CPU starts at page 1, goes through the text word by word, and ends at the last page, but a GPU divides the book into many small parts, distributes them to all its stream cores, and counts the appearances of the word in a fraction of the time.
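The divide-and-count idea can be sketched in ordinary Python, using CPU worker processes as a stand-in for the GPU's stream cores (the function names here are just for illustration):

```python
from multiprocessing import Pool

def count_word(args):
    """Count occurrences of `word` in one chunk of the text."""
    chunk, word = args
    return chunk.lower().split().count(word)

def parallel_count(text, word, parts=4):
    """Split the 'book' into parts and count the word in each part in parallel."""
    words = text.split()
    size = max(1, len(words) // parts)
    chunks = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    with Pool(parts) as pool:
        return sum(pool.map(count_word, [(c, word) for c in chunks]))

if __name__ == "__main__":
    book = "the quick brown fox jumps over the lazy dog " * 100
    # Each worker counts its own chunk; the partial counts are summed at the end.
    print(parallel_count(book, "the"))
```

Splitting at word boundaries means no occurrence is ever cut in half, so the partial counts always add up to the sequential answer.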
Based on the GPU core design, a program must be divided into 240 parts (240 threads) to use all 240 stream cores of, say, a GeForce GTX 295. This is not as easy as it sounds, since many programs cannot be parallelized, or doing so is extremely difficult; even current CPUs face the same problem when dividing work into 8 threads to use the 8 virtual cores of a Core i7.
The real-world workloads that best use this capability are found in video editing and scientific computing, where there are no book pages but instead repeated additions and multiplications of floating-point numbers in big matrices: the exact same operation carried out on thousands of numbers.
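That matrix workload boils down to the same multiply-add repeated for every output cell; a minimal pure-Python sketch of the idea:

```python
def matmul(a, b):
    """Naive matrix multiply: the same multiply-add repeated for every cell.
    On a GPU, each output cell could be computed by a separate stream core,
    since no cell depends on any other cell's result."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(a, b))  # [[19.0, 22.0], [43.0, 50.0]]
```

Because every cell is independent, this is exactly the kind of "embarrassingly parallel" work a GPU excels at.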
In future, to run a program at lightning speed, every part of it must be divided into several threads, with each processing step independent of the result of the previous one, making the program completely parallelized. So processing speed depends not only on the hardware but also on compatible software.
Difference between GSM, GPRS, EDGE, 3G, WCDMA and HSDPA
It's a very basic and non-technical comparison.

GSM
GSM, which stands for Global System for Mobile Communications, is the basic standard-bearer of 2G technologies. It is mainly used in mobile communication. The Short Messaging System (SMS) was introduced into GSM networks, along with the capability to download content from various service providers. The content could be ring tones, logos and picture messages.
It can support voice telephony and data; however, the data rate is only 9.6kb/s, which is a very low bit rate for data communication.

GPRS
GPRS, which stands for General Packet Radio Service, is used to give higher data speeds over GSM. It is not a replacement for GSM; it is just an extension to the older GSM technology to gain faster speed.
Multimedia Messaging Service, or MMS, is a feature of GPRS. It allowed subscribers to send videos, pictures, or sound clips to each other just like text messages. GPRS also gave mobile handsets the ability to surf the Internet at dial-up speeds through WAP-enabled sites.
GPRS offered a higher bit rate (up to 171kb/s) by using packet-switched technology over GSM.
EDGE
EDGE stands for Enhanced Data Rates for GSM Evolution. This technology is also termed Enhanced GPRS (EGPRS).
It uses the same equipment as GSM with only a few minor modifications to provide faster data speeds, and is often regarded as a stepping stone towards 3G, which is why it is called 2.75G (with GPRS commonly called 2.5G).
EDGE is a digital mobile phone technology, whereas GPRS is essentially a packet-oriented mobile data service; EDGE even qualifies as a 3G radio technology under the ITU's definition.
3G
The introduction of 3G changed a lot of the accepted standards in the mobile phone industry. It allows the use of greater bandwidth, enabling more features to be implemented on it.
3G enables features like video calls and TV applications because of its speed, which began at 384kbps, well within DSL speeds. Further development of 3G technologies has created even faster data rates, reaching 3.6 and even 7.2Mbps.
Users are also required to switch mobile phones in order to take advantage of the new features of 3G.
WCDMA
3G networks are based on WCDMA, i.e. Wideband Code Division Multiple Access, a mobile technology that improves upon the capabilities of current GSM networks.
HSDPA
HSDPA (High Speed Downlink Packet Access) is also known as 3.5G, as it offers no substantial upgrade to the feature set of WCDMA but improves the speed of data transmission to enhance those services. WCDMA networks provide a maximum of 384kbps, while HSDPA allows speeds above 384kbps, most notably 3.6Mbps and 7.2Mbps.
HSDPA has lower latency times and Fast Packet Scheduling compared to WCDMA.
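Using the peak rates quoted above (real-world throughput is always lower), a rough sketch of how long each technology needs to move one megabyte of data:

```python
# Peak data rates quoted in the article, in kilobits per second
rates_kbps = {"GSM": 9.6, "GPRS": 171, "WCDMA (3G)": 384, "HSDPA": 7200}

def transfer_seconds(size_mb, rate_kbps):
    """Time to move size_mb megabytes at rate_kbps kilobits per second."""
    bits = size_mb * 8 * 1000 * 1000   # using decimal megabytes
    return bits / (rate_kbps * 1000)

for name, rate in rates_kbps.items():
    print(f"{name}: {transfer_seconds(1, rate):.1f} s per MB")
```

The same megabyte that takes nearly 14 minutes over plain GSM moves in about a second over HSDPA, which is why the jump from 2G to 3.5G felt so dramatic.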
Sleep
Sleep mode is a power-saving state, similar to pausing a DVD movie. All actions on the computer are stopped, and any open documents and applications are kept in memory. You can quickly resume normal, full-power operation within a few seconds. Sleep mode is basically the same thing as Standby mode.
Sleep mode is useful if you want to stop working for a short period of time. The computer doesn't use much power in Sleep mode.
Hibernate
Hibernate mode saves your open documents and running applications to your hard disk and shuts down the computer, which means that once your computer is in Hibernate mode, it uses zero power. Once the computer is powered back on, it resumes everything where you left off.
Use this mode if you won't be using the laptop for an extended period of time and you don't want to close your documents.
Does Bitrate Really Make a Difference In My Music?
What Is Bitrate?
You've probably heard the term "bitrate" before, and you probably have a general idea of what it means, but just as a refresher, it's probably a good idea to get acquainted with its official definition so you know how all this stuff works. Bitrate refers to the number of bits, or the amount of data, processed over a certain amount of time. In audio, this usually means kilobits per second. For example, the music you buy on iTunes is 256 kilobits per second, meaning there are 256 kilobits of data stored in every second of a song.
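That definition makes file sizes easy to estimate: bitrate times duration gives the amount of data. A quick back-of-the-envelope sketch (the helper name is just for illustration):

```python
def track_size_mb(bitrate_kbps, seconds):
    """Approximate audio file size: bitrate (kilobits/s) times duration."""
    kilobits = bitrate_kbps * seconds
    return kilobits / 8 / 1000  # kilobits -> kilobytes -> megabytes

# A 4-minute track at iTunes' 256kbps:
print(track_size_mb(256, 240))  # 7.68 (MB)
```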
The higher the bitrate of a track, the more space it will take up on your computer. Generally, an audio CD will actually take up quite a bit of space, which is why it's become common practice to compress those files down so you can fit more on your hard drive (or iPod, or Dropbox, or whatever). It is here where the argument over "lossless" and "lossy" audio comes in.
Lossless and Lossy Formats
When we say "lossless", we mean that we havent really altered the original file. That is, weve ripped a track from a CD to our hard drive, but havent compressed it to the point where weve lost any data. It is, for all intents and purposes, the same as the original CD track.
More often than not, however, you probably rip your music as "lossy". That is, you've taken a CD, ripped it to your hard drive, and compressed the tracks down so they don't take up as much space. A typical MP3 or AAC album probably takes up 100MB or so. That same album in a lossless format, though, such as FLAC or ALAC (also known as Apple Lossless), would take up closer to 300MB, so it's become common practice to use lossy formats for faster downloading and more hard drive savings.
The problem is that when you compress a file to save space, you're deleting chunks of data. Just like when you take a PNG screenshot of your computer screen and compress it to a JPEG, your computer is taking the original data and "cheating" on certain parts of the image, making it mostly the same but with some loss of clarity and quality. Take the two images below as an example: the one on the right has clearly been compressed, and its quality has diminished as a result. (You'll probably want to expand the image for a closer look to see the differences; look at the fox's ears and nose.)
Remember, of course, that you're still reaping the benefits of hard drive space with lossy music (which can make a big difference on a 32GB iPhone); it's just the tradeoff you make. There are different levels of lossiness, as well: 128kbps, for example, takes up very little space, but will also be lower quality than a larger 320kbps file, which is lower quality than an even larger 1,411kbps file (which is considered lossless). However, there's a lot of argument as to whether most people can even hear the difference between different bitrates.
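To put those bitrates in perspective, here's a rough sketch of how many four-minute tracks fit on a 32GB device at each level (the helper name is just for illustration, and decimal megabytes are assumed):

```python
def tracks_per_gb(bitrate_kbps, minutes=4, capacity_gb=32):
    """How many tracks of `minutes` length fit in capacity_gb gigabytes."""
    track_mb = bitrate_kbps * minutes * 60 / 8 / 1000  # kilobits -> MB
    return int(capacity_gb * 1000 // track_mb)

for rate in (128, 320, 1411):
    print(f"{rate} kbps: ~{tracks_per_gb(rate)} four-minute tracks on 32GB")
```

The lossless 1,411kbps files eat roughly eleven times the space of 128kbps MP3s, which is the whole tradeoff in one number.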
Does It Really Matter?
Since storage has become so cheap, listening to higher-bitrate audio is becoming more popular (and practical). But is it worth the time, effort, and space? I always hate answering questions this way, but unfortunately the answer is: it depends.
Part of the equation is the gear you use. If you're using a quality pair of headphones or speakers, you're privy to a large range of sound. As such, you're more likely to notice certain imperfections that come with compressing music into lower-bitrate files. You may notice that a certain level of detail is missing in low-quality MP3s; subtle background tracks might be more difficult to hear, the highs and lows won't be as dynamic, or you might just plain hear a bit of distortion. In these cases, you might want to get a higher-bitrate track.
Difference between x86 (32-bit) and x64 (64-bit) architectures
Difference Between 64 and 32 bit processors:
HISTORY:
Both 32-bit and 64-bit architectures have been around for decades, but were mostly used in complicated enterprise computers like the IBM 7030 Stretch, built in 1961. 32-bit architecture became available to consumers in the 1980s; the Intel 386 was one example.
ARCHITECTURE:
A 32-bit processor can represent numbers from 0 to 4,294,967,295 (32 bits wide), while a 64-bit processor can represent numbers from 0 to 18,446,744,073,709,551,615 (64 bits wide). Obviously, this means your computer can do math with larger numbers and be more efficient with smaller ones. A 64-bit processor is typically made with more advanced silicon processes, has more transistors, and runs at faster speeds. This is currently where the true benefit of switching to a 64-bit processor lies.
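Those two limits are simply 2^32 - 1 and 2^64 - 1, which you can verify directly:

```python
# Largest unsigned integer representable in 32 and 64 bits
print(2**32 - 1)  # 4294967295
print(2**64 - 1)  # 18446744073709551615
```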
Difference Between 32-bit and 64-bit OS:
To create a platform capable of running on 32-bit processors, 32-bit operating systems and software were developed; similarly, 64-bit operating systems were developed for 64-bit processors.
However, Windows did not become a 32-bit operating system until Windows 95. Windows XP was the first consumer version of Windows to receive a 64-bit edition.
RAM:
A 32-bit version of Windows can access up to 4GB of RAM, whether paired with a 32-bit or a 64-bit processor.
A 64-bit version of Windows can access far more RAM, 128GB or more depending on the edition, when paired with a 64-bit processor.
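The 4GB ceiling follows directly from the width of a 32-bit memory address:

```python
# A 32-bit pointer can address 2**32 distinct bytes:
addressable_bytes = 2**32
print(addressable_bytes / 1024**3)  # 4.0 (GiB) -- hence the 4GB RAM ceiling
```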
File size:
A 32-bit version of Windows can make use of hard drives up to 8TB and a single file size of up to 4GB.
A 64-bit version of Windows can make use of hard drives larger than 8TB and file sizes larger than 4GB.
Compatibility:
A 32-bit version of Windows can be installed on computers with either a 32-bit or a 64-bit processor.
A 64-bit version of Windows can only be installed on computers with a 64-bit processor.
Compatibility Issues in Installing software:
Attempting to run a program built for a 32-bit version of Windows on a 64-bit version may sometimes cause the program to crash or produce a compatibility error.
A program built for a 64-bit version cannot be installed on 32-bit versions of Windows.
As you can see, a sound argument can be made for both cases. You'll have to determine whether the differences will benefit your situation and computing future; I'll leave the ultimate decision up to you.