Category: General IT

Which is best to buy, and what is the difference between Core i3, Core i5 and Core i7 processors?

Intel Core is a brand name used for various mid-range to high-end consumer and business microprocessors made by Intel.

Intel Corporation is an American multinational semiconductor company headquartered in Santa Clara, California, United States, and the world’s largest semiconductor chip maker by revenue. It is the inventor of the x86 series of microprocessors, the processors found in most personal computers. Founded on July 18, 1968, the company took its name from a portmanteau of Integrated Electronics (a common misconception is that “Intel” comes from the word intelligence).

The current Corporate Logo, used since 2005.

Intel Core i3, Core i5, and Core i7 CPUs have been around for over a year now, but some buyers still get stumped whenever they attempt to build their own systems and are forced to choose among the three. With the more recent Sandy Bridge architecture now on store shelves, we expect the latest wave of buyers to ask the same kind of questions.

Generally speaking, Core i7s are better than Core i5s, which are in turn better than Core i3s. No, a Core i7 does not have seven cores, nor does a Core i3 have three. The numbers are simply indicative of their relative processing power.

Their relative levels of processing power are also signified by their Intel Processor Star Ratings, which are based on a collection of criteria involving their number of cores, clock speed (in GHz), size of cache, as well as some new Intel technologies like Turbo Boost and Hyper-Threading.

Core i3s are rated with three stars, i5s have four stars, and i7s have five. If you’re wondering why the ratings start with three, well they actually don’t. The entry-level Intel CPUs — Celeron and Pentium — get one and two stars respectively.


Number of cores:

The more cores there are, the more tasks (known as threads) can be served at the same time. Core i3 CPUs have the lowest number of cores: currently, all Core i3s are dual-core processors.

Currently, all Core i5 processors except the i5-661 are quad cores. The Core i5-661 is only a dual-core processor with a clock speed of 3.33 GHz. Remember that all Core i3s are also dual cores, and the i3-560 also runs at 3.33GHz yet is a lot cheaper. It sounds like it might be a better buy than the i5.

Even though the i5-661 normally runs at the same clock speed as the Core i3-560, and even though they have the same number of cores, the i5-661 benefits from a technology known as Turbo Boost.

Intel Turbo Boost:

The Intel Turbo Boost Technology allows a processor to dynamically increase its clock speed whenever the need arises. The maximum amount that Turbo Boost can raise clock speed at any given time is dependent on the number of active cores, the estimated current consumption, the estimated power consumption, and the processor temperature.

For the Core i5-661, the maximum allowable processor frequency is 3.6 GHz. Because none of the Core i3 CPUs have Turbo Boost, the i5-661 can outrun them when it needs to. And because every Core i5 is equipped with Turbo Boost (the newer Sandy Bridge models with version 2.0 of the technology), all of them can outrun any Core i3.
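The idea can be sketched as a toy model. This is an illustration only: the function name, weighting and thresholds below are invented for the example, and are not Intel's actual algorithm or the i5-661's real behaviour.

```python
# Illustrative sketch of a Turbo Boost-style decision: the fewer cores
# active and the cooler the chip, the closer it may run to its turbo
# maximum. All numbers and the weighting scheme are invented.

def turbo_frequency(base_ghz, max_turbo_ghz, active_cores, temp_c,
                    temp_limit_c=95, total_cores=2):
    """Return an allowed clock speed between base and turbo maximum."""
    if temp_c >= temp_limit_c:
        return base_ghz                      # no thermal headroom: stay at base
    headroom = max_turbo_ghz - base_ghz
    idle_share = (total_cores - active_cores) / total_cores
    thermal_share = (temp_limit_c - temp_c) / temp_limit_c
    return round(base_ghz + headroom * min(1.0, idle_share + thermal_share), 2)

# A lightly loaded, cool chip gets the most boost:
print(turbo_frequency(3.33, 3.60, active_cores=1, temp_c=50))   # 3.59
```

The real mechanism weighs the same inputs the text lists (active cores, current draw, power draw and temperature), but the exact policy is fixed in the processor itself.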

Cache size:

Whenever the CPU finds that it keeps on using the same data over and over, it stores that data in its cache. Cache is just like RAM, only faster — because it’s built into the CPU itself. Both RAM and cache serve as holding areas for frequently used data. Without them, the CPU would have to keep on reading from the hard disk drive, which would take a lot more time.

Basically, RAM minimises interaction with the hard disk, while cache minimises interaction with the RAM. Obviously, with a larger cache, more data can be accessed quickly. All Core i3 processors have 3MB of cache. All Core i5s, except again for the 661 (which has only 4MB), have 6MB of cache. Finally, all Core i7 CPUs have 8MB of cache. This is clearly one reason why an i7 outperforms an i5, and why an i5 outperforms an i3.
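The same principle appears in software. Python's functools.lru_cache, for example, keeps recently used results in a fast holding area so that repeated requests skip the slow recomputation, much as CPU cache spares a trip to RAM:

```python
# Caching in miniature: repeated requests for the same value are served
# from the cache instead of redoing the "slow" work.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=128)
def slow_lookup(n):
    global calls
    calls += 1          # count how often the slow work actually runs
    return n * n        # stand-in for an expensive fetch or computation

for _ in range(3):
    slow_lookup(7)      # requested three times...

print(calls)            # ...but computed only once: prints 1
```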

Hyper-Threading:

Strictly speaking, only one thread can be served by one core at a time. So if a CPU is a dual core, then supposedly only two threads can be served simultaneously. However, Intel has introduced a technology called Hyper-Threading. This enables a single core to serve multiple threads.

For instance, a Core i3, which is only a dual core, can actually serve two threads per core. In other words, a total of four threads can run simultaneously. Thus, even if Core i5 processors are quad cores, since they don’t support Hyper-Threading (again, except the i5-661) the number of threads they can serve at the same time is just about equal to those of their Core i3 counterparts.

This is one of the many reasons why Core i7 processors are the crème de la crème. Not only are they quad cores, they also support Hyper-Threading. Thus, a total of eight threads can run on them at the same time. Combine that with 8MB of cache and Intel Turbo Boost Technology, which all of them have, and you’ll see what sets the Core i7 apart from its siblings.
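You can see this thread count on your own machine with a few lines of Python: os.cpu_count() reports logical processors, i.e. hardware threads, so a quad-core CPU with Hyper-Threading typically reports 8, while a plain quad-core reports 4.

```python
# Report how many threads this machine can serve at once, then hand one
# small task to each available hardware thread.
import os
from concurrent.futures import ThreadPoolExecutor

logical = os.cpu_count() or 1    # logical processors (hardware threads)
print(f"This machine can serve {logical} threads simultaneously")

with ThreadPoolExecutor(max_workers=logical) as pool:
    results = list(pool.map(lambda n: n * n, range(logical)))
print(results)
```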

The upshot is that if you do a lot of things at the same time on your PC, then it might be worth forking out a bit more for an i5 or i7. However, if you use your PC to check emails, do some net banking, read the news, and download a bit of music, you might be equally served by the cheaper i3.

Another factor in this deliberation is that more and more programs are being released with multithread capability; that is, they can split a single job across more than one CPU thread.

So things happen more quickly. Some photo editors and video-editing programs are multi-threaded, for example. However, the internet browser you use to access net banking or your email client generally is not.

The current Intel Inside logo, used since 2011.

Difference between Core i3, Core i5 and Core i7

Core i3:

* Entry-level processor.
* 2 Cores
* 4 Threads
* Hyper-Threading (efficient use of processor resources)
* 3-4 MB Cache
* 32 nm Silicon (less heat and energy)

Core i5:

* Mid-range processor.
* 2-4 Cores
* 4 Threads
* Turbo Boost (dynamically raises clock speed when headroom allows)
* Hyper-Threading (on dual-core models only)
* 3-6 MB Cache
* 32-45 nm Silicon (less heat and energy)

Core i7:

* High-end processor.
* 4 Cores
* 8 Threads
* Turbo Boost (dynamically raises clock speed when headroom allows)
* Hyper-Threading (efficient use of processor resources)
* 4-8 MB Cache
* 32-45 nm Silicon (less heat and energy)

Hopefully this gives you some insight for your next CPU selection.

Happy computing!!

source: pcworld, wikipedia, Intel

Cloud computing:

Everywhere you look people are talking about cloud computing, but what is it and why do we need it?

One of the most irritating pieces of jargon to emerge in the last couple of years is ‘the cloud’: it seems almost everyone is talking about cloud products and services in TV adverts. But what precisely is the cloud and how can computer users benefit from it?

 The cloud is nothing new or extraordinary, but many companies have sensed money-spinning opportunities, so the concept is being hyped in adverts. Though there’s little mystery about it, many people are confused about ‘the cloud’.

 

What is the cloud?

Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a metered service over a network (typically the Internet).

Cloud computing provides computation, software, data access, and storage resources without requiring cloud users to know the location and other details of the computing infrastructure.

End users access cloud-based applications through a web browser or a lightweight desktop or mobile app, while the business software and data are stored on servers at a remote location. Cloud application providers strive to deliver the same or better service and performance than if the software were installed locally on end-user computers.

If you access email online through webmail services such as Hotmail or Gmail, then you already use cloud-based email. Online storage services such as Dropbox and Windows Live SkyDrive exist in the cloud, while Google Docs is a cloud-based office suite.

The key difference is that, in most cases, the code that runs the software is stored on a web server and does not run from your PC’s hard disk. You interact with the application over an internet connection, usually through a web browser.

Cloud services are often free, supported by advertising or the lure of improved, premium packages. A good example of this is online storage. Dropbox, for instance, provides a free version of its cloud-storage service that includes 2GB of space for user files, with paid-for options for those needing more online storage.

Rather than keeping files solely on a PC, a cloud-storage service like Dropbox also puts copies online. Files held in cloud-storage services are accessed using a username and password.

Even though you’re accessing them via ‘the cloud’, such files do have a physical existence – on a server computer. It doesn’t matter where this server computer is: files stored in the cloud can be accessed anywhere, as long as there’s an internet connection.
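The arrangement can be sketched in a few lines of code. This is a deliberately tiny, in-memory stand-in for a real service such as Dropbox (the class and method names are invented): the point is only that the data lives on the server, keyed to an account, so any client that can log in sees the same files.

```python
# Minimal sketch of server-side storage: files belong to an account on
# the server, not to any one device. Names invented for illustration.

class ToyCloudStore:
    def __init__(self):
        self._accounts = {}          # username -> {filename: contents}

    def upload(self, user, filename, contents):
        self._accounts.setdefault(user, {})[filename] = contents

    def download(self, user, filename):
        return self._accounts[user][filename]

server = ToyCloudStore()

# A "client" on a desktop PC uploads a file...
server.upload("alice", "notes.txt", "meeting at 10am")

# ...and a different "client" (say, a phone) fetches the same file,
# because the copy lives on the server, not on either device.
print(server.download("alice", "notes.txt"))   # prints: meeting at 10am
```

Real services add authentication, encryption and redundancy on top, but the shape of the idea is the same.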

Why is cloud talk so popular?

So why have so many companies chosen now to promote online applications? The reasons are availability and cost.

 Widespread access to affordable broadband with reliable mobile internet on 3G networks and the proliferation of Wi-Fi hotspots means people can get online almost anywhere and everywhere.

 As for cost, the economics of running web applications improve with rising use. With online storage, for example, as more people subscribe to services, economies of scale kick in and the cost per gigabyte falls.

The growing popularity of online applications has led to more companies taking an interest. Apple has recently launched iCloud, its attempt to get people to connect together all their (Apple) devices.

The idea is that you can create a document or snap a photo on an iPad, view them on an iPhone and edit them later on an iMac desktop computer. All the versions are synchronised automatically, without the user having to think too much about how it all happens: no matter which device you’re on, the stuff you need will be there.

Big cloud ideas

There are many other cloud-based tools and services that you can access right now, and many of them are free. Projects can be easily shared with friends and colleagues and cloud tools remove the frustration of not being able to access something because it is on another computer.

Cloud-based office suites, for instance, work just like Microsoft Office but exist entirely online, meaning they’re accessed and operated from inside a web browser. We’ve already mentioned Google Docs, but there are numerous alternatives, including Zoho and Microsoft’s own Office 365.

There are disadvantages: most features can be used only when online, as the software behind tools for text creation and editing is on the server, not your PC. Both Google Docs and Zoho offer some ways to work offline using plug-ins for Microsoft Office. Indeed, for the truly cloud-committed there are even operating systems and computers that are wholly reliant on the cloud. Google is leading the way with Chrome OS (which is distinct from Chrome, the web browser with which you may already be familiar).

Chrome OS is based on existing Google software, such as the Chrome web browser and Google Docs. Switch on a Chromebook and you will see various web-based apps, such as Gmail, Google Calendar and so on. Much like with smartphones and tablet computers, additional apps can be installed from the Chrome Web Store.

A big advantage of Chromebooks is that they start up very quickly, as little computing work is performed on the device itself. However, the downside is that if you happen to be in a location where internet access is unavailable, then a Chromebook is useless.

Another option, which isn’t tied into Google’s own cloud services, is Joli OS. It works in a similar way to Chrome OS, but can be installed on almost any computer (and even alongside Windows).

Useful cloud ideas

Fortunately, many cloud services have immediate and obvious benefits. Consider the music service Spotify, for instance. While an application must be downloaded to your local computer in order to use the service, the audio files and your playlists exist on Spotify’s server.

As well as saving on hard disk storage, this allows you to download the Spotify software to any other computer, log into your account and instantly access your music library.

Spotify does allow Premium subscribers to download tunes locally (so they can be played when internet access isn’t available) but the service embodies the cloud concept.

Remember, too, that cloud computing isn’t all about storing things online: as we’ve seen, it is as much about keeping devices synchronised. As more people switch between tablets, smartphones, laptops and desktop PCs, there is a need for files available on one to be available on the other.

Traditional synchronisation methods, such as emailing files to yourself or putting them on a USB memory key, are clunky and unreliable – but cloud services can do away with such inconvenience.

Indeed, this is the key function of Apple’s iCloud service: total synchronisation of files between multiple Apple devices. 

A service called Dropbox works in a similar fashion, is free to use, and can be up and running in moments. It stores files online but also keeps synchronised local copies on any computer with Dropbox installed, so your files are always accessible and up to date.
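A rough sketch of how such a client can decide what to synchronise: compare a content hash of each local file with the server's copy and transfer only the files that differ. This is a simplification (real clients also track timestamps, deltas and conflicts), and the variable names are our own.

```python
# Decide which files need uploading by comparing content hashes of the
# local and remote copies; identical files are skipped.
import hashlib

def digest(data):
    return hashlib.sha256(data.encode()).hexdigest()

local = {"a.txt": "hello", "b.txt": "draft v2"}
remote = {"a.txt": "hello", "b.txt": "draft v1"}

to_upload = [name for name, data in local.items()
             if name not in remote or digest(remote[name]) != digest(data)]
print(to_upload)   # only the changed file: ['b.txt']
```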

Office in the cloud

While Dropbox is a formidable file-synchronisation tool it doesn’t provide any of the tools needed to work with documents (though some smartphone versions of the Dropbox app do have built-in file viewers).

Even though you can access files from almost any internet-connected device, editing them still requires the relevant software (such as Microsoft Word) to be installed.

As a complement, or alternative, consider making use of a cloud-based office suite. We’ve mentioned Google Docs, which is a popular example. It’s free to use, though does require a Google account.

To try it, launch a web browser and visit the Google Docs page and either log in or click the ‘Sign up for a new Google Account’ and follow the prompts.

Once signed in, a menu bar will appear at the top of the browser window – click Documents to view the Google Docs home page. Now click the Create new button and choose the type of document you’d like to create.

Choosing Document, for example, will open a blank word-processor document. It can be named by clicking the ‘Untitled document’ label at the top of the page, while the document itself is fairly self-explanatory (it works just like Word).

 Changes are saved automatically and when you next log in to your Google Docs account – from any web browser – your documents will be available to be worked on right away.

 Get creative

As well as practical uses, the cloud can also be a useful creative tool. The YouTube video editor can make simple changes to videos without needing additional software.

Go to YouTube’s editor page and any clips you’ve uploaded to YouTube can be edited by adding text and transitions, cutting clips, or changing the brightness and contrast. There is also a large selection of Creative Commons video clips and sounds, too.

Windows Live, available for Windows 7, includes Windows Live Photo Gallery and Movie Maker, both of which integrate with cloud services.

 Pixlr is an online photo editor that runs entirely from your web browser. Go to the Pixlr website and either click Open photo editor or Retro vintage effects. The former is a standard photo-editing suite with plenty of easy-to-use tools for improving photos, while the Retro vintage effects tool is a nice way to add interesting effects.

Security in the cloud:

Clearly, the cloud has many benefits but it also brings security and privacy concerns. Cloud data is stored on server computers in unknown locations around the world.

 So long as a user of a cloud service can access their data, this hardly matters – but what happens when things go wrong? If there’s a fire in the cloud company’s server room or the firm goes bust, where is your data?

 These aren’t fantasy concerns. In April 2011, Amazon’s web-services business – known as AWS – became unavailable for several days following problems at a data center in Northern Virginia in the US.

Amazon runs a vast number of web servers and allows other companies to rent space on them, so the websites of dozens of big-name firms went offline with AWS.

This highlights a problem with relying entirely on cloud-based tools and services: you have little control over what happens to data stored elsewhere. Obviously Amazon is a big, well-resourced company and it sorted the problem – but many of its customers were inconvenienced for days.

And don’t forget privacy. While any trustworthy company offering cloud-based services will abide by a privacy policy, leaks can happen. Recently, Dropbox had to admit that for a period of hours, an error left all accounts exposed and at risk of being compromised.

Every cloud…

The cloud is really just a bunch of server computers on the internet and if something goes wrong, your data could be stolen or even lost forever – so it’s a good idea to back up stuff stored in the cloud.

 However, used wisely, the cloud can provide easy access to files and applications. Being able to get to what you need, whenever and wherever you are, is how computing should work, and that’s what the cloud does.

The Benefits of Cloud Computing:

There are lots of advantages to using cloud computing for international companies. One of the major ones is the flexibility that it offers. Cloud computing means that staff can access the files and data that they need even when they’re working remotely and/or outside office hours.

Flexibility & Mobility:

  • As long as they can get on the Internet, staff can access information from home, on the road, from clients’ offices or even from a smartphone such as a BlackBerry or iPhone. Staff can also work collaboratively on files and documents, even when they’re not physically together. Documents can simultaneously be viewed and edited from multiple locations.

Highly Automated:

  • Cloud computing can be very quick and easy to get up and running. Consider, for example, how quickly you can set up a Gmail or Hotmail account and start emailing – it takes minutes and all you need is a computer and the Internet. Downloading and installing software, on the other hand, takes much longer.

 Reduced Cost:

  • Cloud computing is often cheaper and less labour-intensive for companies too. There is no need to buy and install expensive software because it’s already installed online remotely and you run it from there, not to mention the fact that many cloud computing applications are offered free of charge. The need to pay for extensive disk space is also removed. With cloud computing, you subscribe to the software, rather than buying it outright. This means that you only need to pay for it when you need it, and it also offers flexibility, in that it can be quickly and easily scaled up and down according to demand. This can be particularly advantageous when there are temporary peaks in demand, such as at Christmas or in summer, for example.

Increased Storage:

  • A major advantage of using cloud computing for many companies is that because it’s online, it offers virtually unlimited storage compared to server and hard drive limits. Needing more storage space does not cause issues with server upgrades and equipment – usually all you need to do is increase your monthly fee slightly for more data storage.

 Gartner: Seven cloud-computing security risks:

1.  Privileged user access. Sensitive data processed outside the enterprise brings with it an inherent level of risk, because outsourced services bypass the “physical, logical and personnel controls” IT shops exert over in-house programs. Get as much information as you can about the people who manage your data. “Ask providers to supply specific information on the hiring and oversight of privileged administrators, and the controls over their access,” Gartner says.

2.  Regulatory compliance. Customers are ultimately responsible for the security and integrity of their own data, even when it is held by a service provider. Traditional service providers are subjected to external audits and security certifications. Cloud computing providers who refuse to undergo this scrutiny are “signalling that customers can only use them for the most trivial functions,” according to Gartner.

3.  Data location. When you use the cloud, you probably won’t know exactly where your data is hosted. In fact, you might not even know what country it will be stored in. Ask providers if they will commit to storing and processing data in specific jurisdictions, and whether they will make a contractual commitment to obey local privacy requirements on behalf of their customers, Gartner advises. 

4.  Data segregation. Data in the cloud is typically in a shared environment alongside data from other customers. Encryption is effective but isn’t a cure-all. “Find out what is done to segregate data at rest,” Gartner advises. The cloud provider should provide evidence that encryption schemes were designed and tested by experienced specialists. “Encryption accidents can make data totally unusable, and even normal encryption can complicate availability,” Gartner says.

5.  Recovery. Even if you don’t know where your data is, a cloud provider should tell you what will happen to your data and service in case of a disaster. “Any offering that does not replicate the data and application infrastructure across multiple sites is vulnerable to a total failure,” Gartner says. Ask your provider if it has “the ability to do a complete restoration, and how long it will take.”

6.  Investigative support. Investigating inappropriate or illegal activity may be impossible in cloud computing, Gartner warns. “Cloud services are especially difficult to investigate, because logging and data for multiple customers may be co-located and may also be spread across an ever-changing set of hosts and data centres. If you cannot get a contractual commitment to support specific forms of investigation, along with evidence that the vendor has already successfully supported such activities, then your only safe assumption is that investigation and discovery requests will be impossible.”

7.  Long-term viability. Ideally, your cloud computing provider will never go broke or get acquired and swallowed up by a larger company. But you must be sure your data will remain available even after such an event. “Ask potential providers how you would get your data back and if it would be in a format that you could import into a replacement application,” Gartner says.

source: infoworld, computeractive, wikipedia

Jargon Buster: A

Computing terms explained in plain English

1.  AAC

Advanced Audio Coding. A type of music file.

2.  Access point

Links wireless network users to a wired network.

3.  ActiveX

Technology for adding extra features to a web browser.

4.  Add-in

Generic term for a piece of software that adds extra features to another program.

5.  Address bar

An area of a web browser into which internet addresses can be typed. Pressing enter then directs the browser to that exact page. The address bar is sometimes confused with a search bar. Typing addresses into a search field will produce a list of websites that may or may not match what you are looking for.

 6.  ADF

Automatic Document Feeder. A device that feeds sheets of paper into a photocopier or scanner, one by one.

 7.  ADSL

Asymmetric Digital Subscriber Line. A technology that converts a standard phone line into a broadband internet connection.

 8. ADSL2

A newer, faster type of ADSL broadband.

 9. Adware

Software that displays adverts.

10. Aero

The technology that provides fancy window effects in Windows 7 and some versions of Vista.

 11. AGP

Accelerated Graphics Port. A slot used to connect graphics cards in older computers.

12. AI

Artificial Intelligence. A computer program designed to mimic the behaviour of humans or animals.

 13. AIFF

Audio Interchange File Format. A digital audio format often associated with Apple Mac computers.

 14. Ajax

Asynchronous JavaScript and XML. A technology that allows websites to fetch fresh information from the web without loading a new page.

15. Analogue

A signal whose value varies over time, as opposed to a digital signal which is either on or off.

16. Android

An operating system for portable computers and mobile phones, based on the Linux system that is used on some PCs.

 17. Animated GIF

A type of simple animation found on the internet.

18. Annotation

A comment on a document, rather like a note jotted down on a paper document.

 19. Anti-virus

Software that protects against and removes computer viruses.

 20. Aperture

An opening that controls the amount of light entering a camera lens.

 21. API

Application Programming Interface. A system built into a program so that other programs can work with it.

22. App

A small program designed to run on a phone or handheld computer (short for application). Could be a game, utility or any other type of program.

23. Applet

A small program, often one that runs within a larger program to perform a specific task.

24. Application

A computer program that performs a specific task, such as Microsoft Word for creating documents.

25. Aspect ratio

A measurement of the shape of a display. Traditional computer screens are 4:3. Widescreen displays are 16:9 or 16:10.

26. ATRAC

Adaptive Transform Acoustic Coding. A type of music file used by some older Sony players.

27. Attachment

A computer file, such as a word-processing document, sent along with an email.

28. Audible

A company that sells downloadable audio books. Also used to describe the files it uses.

29. Audio book

A book read aloud and recorded on tape, CD or as a digital file.

30. Autocorrect

A technology that corrects words as you type them.

31. Auto play

A Windows feature that allows a program to be automatically started when a disk is connected to a computer.

32. Auto sum

A tool in Excel that provides a quick total of the selected cells.

33. Auto trace

A tool in some photo editors that attempts to trace an image, converting it into vector graphics that can be resized.

34. AV

Audio/Visual. Any device that can show video or play sound.

35. Avatar

A graphic or icon used to represent a computer user, either online or in a video game.

36. AVCHD

Advanced Video Coding High Definition. A standard for storing high-definition video. AVCHD discs can be played by most Blu-ray players.

37. AVI

Audio Video Interleave. A type of video file. AVI is known as a container format, as it can hold many types of audio and video.

38. AV Sender

A device that sends audio and video signals wirelessly.

source: computeractive

Researchers spot scammers using fake browser plug-ins

Fake Browser Plug-ins: A New Vehicle for Scammers

Security researchers from Symantec have spotted a fake browser plug-in currently circulating in the wild.

 How the infection takes place:

The scenario is very simple: the victim is lured into watching a video, but instead of being asked to share or like the video (as we have seen in many scams), the victim is presented with a fake plug-in download image, claiming the plug-in is required to see the video.

Once end users are tricked into installing the fake YouTube-themed browser extension, their User-Agent information is retrieved and the matching fake plug-in is downloaded accordingly. For the time being, only Mozilla Firefox and Google Chrome plug-ins are being used.
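The User-Agent trick is ordinary content negotiation turned to bad ends: a page inspects the browser's User-Agent string and serves the matching download. A minimal sketch of the mechanism (function and file names invented; the UA strings below are abbreviated):

```python
# Serve a different payload depending on the visitor's browser, as
# identified by the User-Agent string. This is the same mechanism
# legitimate sites use to offer the right installer.

def pick_plugin(user_agent):
    ua = user_agent.lower()
    if "firefox" in ua:
        return "fake_plugin.xpi"     # Firefox extension package format
    if "chrome" in ua:
        return "fake_plugin.crx"     # Chrome extension package format
    return None                      # other browsers: no payload offered

print(pick_plugin("Mozilla/5.0 ... Firefox/10.0"))          # fake_plugin.xpi
print(pick_plugin("Mozilla/5.0 ... Chrome/17 Safari/535"))  # fake_plugin.crx
```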

The scam is currently circulating using the “[Video] Leakead video of Selena Gomez and Justin Beiber [NEW HOT!!]” lure (spelling as it appears in the scam).


This isn’t the first time that scammers have relied on fake browser plug-ins and extensions as a propagation vehicle for their scams. In December 2011, researchers from Websense detected a malicious campaign in which scammers were successfully hijacking Facebook accounts using bogus browser extensions.

Scammers are always looking for new techniques to lure users.

Facebook users are advised to be extra vigilant when interacting with content shared on the most popular social networking site.

Additional Facebook Security Tips:

  • Review your security settings and consider enabling login notifications. They’re in the drop-down box under Account on the upper, right-hand corner of your Facebook home page.
  • Don’t click on strange links, even if they’re from friends, and notify the person if you see something suspicious.
  • Don’t click on friend requests from unknown parties.
  • If you come across a scam, report it so that it can be taken down.
  • Don’t download any applications you aren’t certain about.
  • For using Facebook from places like hotels and airports, text “otp” to 32665 for a one-time password to your account.
  • Visit Facebook’s security page, and read the items “Take Action” and “Threats.”

source: symantec,zdnet

UEFI:

UEFI: the acronym stands for Unified Extensible Firmware Interface, and the technology is designed to be more flexible than its venerable predecessor, the BIOS.

Wave goodbye to BIOS, say hello to UEFI, a new technology that will drastically reduce start-up times.

The next generation of home computers will be able to boot up in just a few seconds, as 25-year-old BIOS technology makes way for new start-up software known as UEFI.

BIOS technology, which has been used to boot up computers since 1979, was never designed to last as long as it has, and is one of the reasons modern computers take so long to get up and running.

By contrast, UEFI – which stands for Unified Extensible Firmware Interface – has been built to meet modern computing needs, and will soon be the pre-eminent technology in many new computers, enabling them to go from ‘off’ to ‘on’ in seconds.

Pronounced “bye-ose,” BIOS is an acronym for basic input/output system. The BIOS is built-in software that determines what a computer can do without accessing programs from a disk. On PCs, the BIOS contains all the code required to control the keyboard, display screen, disk drives, serial communications, and a number of miscellaneous functions.

The BIOS is typically placed on a ROM chip that comes with the computer (it is often called a ROM BIOS). This ensures that the BIOS will always be available and will not be damaged by disk failures. It also makes it possible for a computer to boot itself.


When you turn on your computer, several events occur automatically:

  1. The CPU “wakes up” (has power) and reads the x86 code in the BIOS chip.
  2. The code in the BIOS chip runs a series of tests, called the POST for Power On Self-Test, to make sure the system devices are working correctly. In general, the BIOS:
    • Initializes system hardware and chipset registers
    • Initializes power management
    • Tests RAM (Random Access Memory)
    • Enables the keyboard
    • Tests serial and parallel ports
    • Initializes floppy disk drives and hard disk drive controllers
    • Displays system summary information
  3. During POST, the BIOS compares the system configuration data obtained from POST with the system information stored on a CMOS – Complementary Metal-Oxide Semiconductor – memory chip located on the motherboard. (This CMOS chip, which is updated whenever new system components are added, contains the latest information about system components.)
  4. After the POST tasks are completed, the BIOS looks for the boot program responsible for loading the operating system. Usually, the BIOS looks on the floppy disk drive A: followed by drive C:
  5. After being loaded into memory, the boot program then loads the system configuration information (contained in the registry in a Windows environment) and device drivers.
  6. Finally, the operating system is loaded, and, if this is a Windows environment, the programs in the Start Up folder are executed.
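
The boot-device search in step 4 can be sketched as a simple loop. This is a hypothetical illustration of the logic only — real firmware is 16-bit x86 assembly, and the drive records here are made up for the example:

```python
# Hypothetical sketch of the BIOS boot-device search (step 4 above).
# A real BIOS reads sector 0 of each drive and checks for the 0x55AA
# boot signature; here a "bootable" flag stands in for that check.

def find_boot_loader(drives):
    """Scan drives in BIOS boot order; return the first bootable one."""
    for drive in drives:
        if drive.get("bootable"):
            return drive["name"]
    return None  # POST would report "No boot device found"

boot_order = [
    {"name": "A:", "bootable": False},  # floppy drive is checked first
    {"name": "C:", "bootable": True},   # then the first hard disk
]
print(find_boot_loader(boot_order))  # C:
```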

The BIOS has two fundamental weaknesses. Firstly, it is based on 16-bit assembly code and cannot directly address the latest 64-bit hardware, and secondly, there are no set standards for specifications, so manufacturers come up with their own versions.

The participants of the UEFI Forum wanted to set this straight. From the outset, each process has been precisely defined, and the boot process, or platform initialization (PI), is clearly described in phases. Immediately after powering up the PC, the Pre-EFI Initialization (PEI) is executed, which initializes the CPU, memory and chipset. This is followed by the Driver Execution Environment (DXE), in which the rest of the hardware is initialized. This saves boot time because UEFI can integrate various drivers that need not be reloaded during booting. Thanks to these drivers, the user already has access to the network card at this early stage of the boot process, including features such as network booting and remote assistance. With the graphics processor enabled, a fancy user interface is also presented.

However, the biggest time-saving feature of UEFI is that not all the installed hard drives are scanned for a boot loader: the boot drive is recorded in the UEFI during installation of the operating system, so the default boot loader runs without time wasted searching the drives.

The faster boot time is not the only advantage of UEFI; applications can be stored on virtually any non-volatile storage device installed on the PC. For example, programs and diagnostic tools such as antivirus or system management tools can be run from an EFI partition on the hard drive. This feature will be very useful to original equipment manufacturers (OEM), who can distribute systems with extra functions in addition to the standard EFI firmware stored on the motherboard’s ROM.

UEFI fully supports 3 TB hard drives

The classic BIOS can access only up to 2^32 sectors of 512 bytes in size, which translates to a total of 2 TB. So the upcoming 3 TB variants of Western Digital Caviar Green and Seagate Barracuda XT won’t be fully compatible with the current BIOS. Seagate uses larger sectors to make the full capacity usable on Windows, but the BIOS cannot boot from this drive.

UEFI, on the other hand, works with the GUID partition table (GPT), which uses 64-bit addresses and can handle up to 2^64 sectors, addressing up to 9 zettabytes (1 zettabyte equals 1 billion terabytes).
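
The arithmetic behind these two limits is a quick back-of-the-envelope check, assuming the classic 512-byte sector size:

```python
# The 2 TB MBR barrier vs. the GPT maximum, with 512-byte sectors.
SECTOR_BYTES = 512

mbr_capacity = 2**32 * SECTOR_BYTES   # 32-bit sector addresses
gpt_capacity = 2**64 * SECTOR_BYTES   # 64-bit sector addresses

print(mbr_capacity / 10**12)  # ~2.2 TB -> the "2 TB" barrier
print(gpt_capacity / 10**21)  # ~9.4 ZB -> the "9 zettabyte" figure
```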

The GUID Partition Table (GPT) was introduced as part of the Unified Extensible Firmware Interface (UEFI) initiative. GPT provides a more flexible mechanism for partitioning disks than the older Master Boot Record (MBR) partitioning scheme that was common to PCs.

A partition is a contiguous space of storage on a physical or logical disk that functions as if it were a physically separate disk. Partitions are visible to the system firmware and the installed operating systems. Access to a partition is controlled by the system firmware before the system boots the operating system, and then by the operating system after it is started.

MBR disks support only four partition table entries. If more partitions are wanted, a secondary structure known as an extended partition is necessary. Extended partitions can then be subdivided into one or more logical disks.

GPT disks can grow to a very large size. The number of partitions on a GPT disk is not constrained by temporary schemes such as container partitions as defined by the MBR Extended Boot Record (EBR).

The GPT disk partition format is well defined and fully self-identifying. Data critical to platform operation is located in partitions and not in unpartitioned or “hidden” sectors. GPT disks use primary and backup partition tables for redundancy and CRC32 fields for improved partition data structure integrity. The GPT partition format uses version number and size fields for future expansion. Each GPT partition has a unique identification GUID and a partition content type, so no coordination is necessary to prevent partition identifier collision. Each GPT partition has a 36-character Unicode name. This means that any software can present a human-readable name for the partition without any additional understanding of the partition.
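
The "no coordination needed" property of partition GUIDs can be illustrated with Python's standard uuid module, which generates the same kind of 128-bit identifier (an illustration only — this is not how firmware or partitioning tools actually write GPT entries):

```python
import uuid

# Two random 128-bit GUIDs (RFC 4122, version 4); the chance of a
# collision is negligible, so no central registry is needed to keep
# partition identifiers unique.
a, b = uuid.uuid4(), uuid.uuid4()
print(a)                  # e.g. 3f2c9a1e-... (random each run)
print(a != b)             # independently generated IDs differ
print(len(str(a)))        # 36 characters in the canonical text form
```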

The following Windows versions support GPT:

  • Windows 7
  • Windows Server 2008
  • Windows Server 2008 R2
  • Windows Vista
  • Windows Server 2003 SP1
  • Windows Server 2003 (64-bit)
  • Windows XP x64 edition
Source: wikipedia, chip, MSDN

USB

USB (Universal Serial Bus) :

Universal Serial Bus (USB) is an industry standard that defines the cables, connectors and protocols used for connection, communication and power supply between computers and electronic devices.

USB was designed to standardise the connection of computer peripherals such as mice, keyboards, digital cameras, printers, portable media players, disk drives and network adapters to personal computers, both to communicate and to supply electric power, but it has become commonplace on other devices such as smart phones, PDAs and video game consoles. USB has effectively replaced a variety of earlier interfaces such as serial and parallel ports, as well as separate power chargers for portable devices.

USB 1.0 :

A group of seven companies (Compaq, DEC, IBM, Intel, Microsoft, NEC and Nortel) began development on USB in 1994.

Intel produced the first integrated circuits supporting USB in 1995.

The original USB 1.0 specification, which was introduced in January 1996, defined data transfer rates of 1.5 Mbit/s “Low Speed” and 12 Mbit/s “Full Speed”.

The first widely used version of USB was 1.1, which was released in September 1998.

USB 2.0 (High-speed USB)  :

The USB 2.0 specification was released in April 2000 and was standardized by the USB Implementers Forum (USB-IF) at the end of 2001. Hewlett-Packard, Intel, Lucent Technologies (now Alcatel-Lucent), NEC and Philips jointly led the initiative to develop a higher data transfer rate, with the resulting specification achieving 480 Mbit/s, a forty-fold increase over the original USB 1.1 specification.

USB 2.0 (High-speed USB) provides additional bandwidth for multimedia and storage applications and has a data transmission speed 40 times faster than USB 1.1.

USB 3.0 (Super Speed USB):

The USB 3.0 (Super Speed USB) standard became official on Nov. 17, 2008.

USB 3.0 boasts speeds 10 times faster than USB 2.0 at 4.8 gigabits per second. It’s meant for applications such as transferring high-definition video footage or backing up an entire hard drive to an external drive.

As hard drive capacity grows, the need for a high-speed data transfer method also increases.

  • Super Speed USB has a 5 Gbps signalling rate, offering a 10x performance increase over Hi-Speed USB.
  • Super Speed USB is a Sync-N-Go technology that minimizes user wait-time.
  • Super Speed USB will provide Optimized Power Efficiency. No device polling and lower active and idle power requirements.
  • Super Speed USB is backwards compatible with USB 2.0. Devices interoperate with USB 2.0 platforms. Hosts support USB 2.0 legacy devices.
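
To put the three signalling rates in perspective, here is a rough calculation of how long a 4.7 GB DVD image would take to transfer at each generation's raw rate. Note this ignores protocol overhead, so real-world times are considerably longer:

```python
# Raw signalling rates in Mbit/s for each USB generation (from the
# figures above); actual throughput is lower due to protocol overhead.
rates_mbit = {
    "USB 1.1 (Full Speed)": 12,
    "USB 2.0 (Hi-Speed)":   480,
    "USB 3.0 (SuperSpeed)": 5000,
}

file_bits = 4.7e9 * 8  # a 4.7 GB DVD image, in bits

for name, mbit in rates_mbit.items():
    seconds = file_bits / (mbit * 1e6)
    print(f"{name}: {seconds:.0f} s")
```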

Online banking safety tips:

The security level of your Internet connection depends on the kind of settings you make.

Keep the following rules in mind if you want to tread a safe path as a user of online banking.

General Information:

  • Regularly update the operating system; this closes security holes as they are found.
  • Always use an updated version of a browser.
  • Use antivirus software and update it regularly (automatic updates recommended).
  • Install and activate a firewall.
  • Never save your PIN and TAN on the computer.
  • Use software only from a trustworthy source.
  • When carrying out online transactions, close all other applications. While banking online, you are advised to avoid chatting, downloading files or surfing the Internet.
  • Regularly back up your data on a removable medium.
  • Select a secure password: it must have at least six characters and should be a combination of letters, numbers and special characters.
  • Note the emergency telephone number of your financial institution so that in an emergency, you can contact someone at any time.
  • Regularly check your transactions (at least once a month) against bank statements. In case of suspicious entries, inform the financial institution immediately.
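
The password rule above can be sketched as a small check: at least six characters, mixing letters, numbers and special characters. This is a hypothetical helper for illustration, not a complete password-strength checker:

```python
import string

def meets_rule(pw):
    """Check the minimal rule: length >= 6, with letters, digits
    and at least one special character."""
    return (len(pw) >= 6
            and any(c.isalpha() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))

print(meets_rule("abc123"))   # False - no special character
print(meets_rule("aB3!xy"))   # True
```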

While logging in:

  • First close all browser windows, and only then open a new window.
  • Manually enter the URL in the address bar of the browser, do not click on links.
  • Ensure that the address entered in the address bar starts with “https”.
  • The security lock in the browser bar must be visible and closed.
  • If necessary, check the security certificate.
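
The "address must start with https" rule can be expressed with the standard library's URL parser. The hostname below is a placeholder, not a real bank, and a real check should also verify the certificate, which this sketch does not:

```python
from urllib.parse import urlparse

def looks_like_secure_url(url):
    """True only if the URL uses the https scheme."""
    return urlparse(url).scheme == "https"

print(looks_like_secure_url("https://bank.example.com/login"))  # True
print(looks_like_secure_url("http://bank.example.com/login"))   # False
```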

After logging in:

  • Do not open any other browser window while carrying out a transaction.
  • If conspicuous error messages appear, close the operation.

After logging out:

  • Make sure you end your online banking session by clicking on “Logout”.
  • Delete the cache and the history of your browser.
  • Close the browser window.

Rules for online banking:

  • To avoid falling prey to phishing, you are advised to steer clear of the following risk factors:
    • Never reveal your access data such as PIN or TAN in person, over the telephone or by e-mail. No authentic bank will ever ask you for such information.
    • Never carry out online transactions in an Internet café.
    • Never run a so-called security update for Internet banking if you are asked to do so through e-mail. Banks never send such updates through e-mails. Visit the home page of your financial institution and check whether it mentions such an update.

Additional tip:

Another option for banking online without fear of a phishing attack is to use signature-protected HBCI (Home Banking Computer Interface) with a chip card. This type of Internet banking is very convenient since one does not need to enter a TAN. The guarded entry of the PIN is a further advantage: a key logger or a Trojan cannot access the PIN you enter. For this interface, you need a suitable chip card reader with an independent PIN pad.

Source: chip

VGA standard to die by 2015

The venerable VGA port is about to die, with the biggest names in the industry including Samsung, Lenovo, LG, Dell, Intel and AMD agreeing to phase it out within five years in favour of today’s HDMI and DisplayPort standards.

The newer standards are better suited to TV and computer display applications respectively, and allow for more flexibility, lower power consumption, higher resolutions, 3D and digital content protection schemes.

The analog VGA standard is already more than 20 years old.

Video Graphics Array (VGA) refers specifically to the display hardware first introduced with the IBM PS/2 line of computers in 1987.

A VGA connector is the most common port on any computer. It connects any standard monitor to the CPU, or you can use this port on your laptop to connect an external monitor or projector. It is also commonly known as an RGB connector, D-sub 15, mini sub D15 or mini D15, as it consists of a total of 15 pins in three rows. VGA connections are commonly colour-coded with blue plastic and labels.

Digital Visual Interface (DVI) is a video interface standard designed to provide very high visual quality on digital display devices such as flat panel LCD computer displays and digital projectors. DVI connections are usually color-coded with white plastic and labels.

It was developed by an industry consortium, the Digital Display Working Group (DDWG) to replace the “legacy analog technology” VGA connector standard. It is designed for carrying uncompressed digital video data to a display. It is partially compatible with the High-Definition Multimedia Interface (HDMI).

High-Definition Multimedia Interface (HDMI) is the most advanced digital audio and video connector on the market today. A single cable provides not only high-definition video content on your display, but also outstanding sound quality with 8-channel digital audio at a 192 kHz sample rate with 24 bits per sample. It is the best and an affordable solution for connecting HD-enabled devices such as an HD DVD player or Blu-ray player to your high-definition TV.
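
The audio figures quoted above imply a specific bandwidth, which is easy to work out: 8 channels at 192,000 samples per second, 24 bits per sample.

```python
# Uncompressed audio bandwidth from the HDMI figures above:
# 8 channels x 192,000 samples/s x 24 bits/sample.
channels, sample_rate_hz, bits_per_sample = 8, 192_000, 24

audio_bps = channels * sample_rate_hz * bits_per_sample
print(audio_bps)          # 36864000
print(audio_bps / 1e6)    # ~36.9 Mbit/s of uncompressed audio
```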

HDMI cables have 19-pin connectors.

Source: chip , Wikipedia