All the interview questions together

Sunday, January 4, 2009

What is a Domain and Workgroup? Highlight advantages and disadvantages.




Domain: “A domain is a group of computers and devices on a network that are administered as a unit with common rules and procedures. Within the Internet, domains are defined by the IP address. All devices sharing a common part of the IP address are said to be in the same domain.” - www.murdoch.edu.au/cwisad/glossary.html

There is no real limit to the number of computers in a domain; it is common to see domains with over 2000 computers/devices (nodes) in them. For networks with that many workstations, you will need enterprise-level software such as SMS, Exchange, etc. to manage them effectively. If you are using Windows XP as the client OS, only Windows XP Pro is capable of operating in a domain environment. You can mix OS clients on a domain: Macintosh, Windows, Linux and Unix machines can all share resources under the same domain as needed.

A domain usually costs more money to set up because more hardware and software is required (such as a domain controller and a server-level OS) to get it configured properly.

In a domain, every machine has the domain-level admin group added to its local Administrators group. This means you can effectively manage any and all of the computers on the domain as long as your user account is a member of the Domain Admins group.

Workgroup: Workgroup computing occurs when all the individuals have computers connected to a network (a group of two or more computer systems linked together) that allows them to send e-mail to one another and share data files and other resources such as printers. Normally, a workgroup is limited to 10 network devices/computers. Both Windows XP Pro and Home can function in a workgroup environment, and your typical out-of-the-box system is set up to be used in a workgroup.
If you want, you can change the network type from workgroup to domain and vice versa. Machines set up in a domain environment are much easier to manage than workgroups when it comes to network resources (shared files, shared printers, etc.).
Since workgroup machines might have different account names, you really have to know the admin account for each specific machine in order to manage the workgroup effectively.





What are the different types of RAID?




  1. What does RAID stand for?



In 1987, Patterson, Gibson and Katz at the University of California, Berkeley, published a paper entitled "A Case for Redundant Arrays of Inexpensive Disks (RAID)". This paper described various types of disk arrays, referred to by the acronym RAID. The basic idea of RAID was to combine multiple small, inexpensive disk drives into an array of disk drives which yields performance exceeding that of a Single Large Expensive Drive (SLED). Additionally, this array of drives appears to the computer as a single logical storage unit or drive.


The
Mean Time Between Failure (MTBF) of the array will be equal to the
MTBF of an individual drive, divided by the number of drives in the
array. Because of this, the MTBF of an array of drives would be too
low for many application requirements. However, disk arrays can be
made fault-tolerant by redundantly storing information in various
ways.
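
A rough worked example of that formula, with hypothetical numbers (real MTBF figures vary by drive model):

    # Approximate MTBF of a non-redundant array: single-drive MTBF / number of drives.
    drive_mtbf_hours = 300_000          # assumed MTBF of one drive
    drives_in_array = 5
    array_mtbf_hours = drive_mtbf_hours / drives_in_array
    print(array_mtbf_hours)             # 60000.0 hours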


Five
types of array architectures, RAID-1 through RAID-5, were defined by
the Berkeley paper, each providing disk fault-tolerance and each
offering different trade-offs in features and performance. In
addition to these five redundant array architectures, it has become
popular to refer to a non-redundant array of disk drives as a RAID-0
array.




  2. Data Striping



Fundamental
to RAID is "striping", a method of concatenating multiple
drives into one logical storage unit. Striping involves partitioning
each drive's storage space into stripes which may be as small as one
sector (512 bytes) or as large as several megabytes. These stripes
are then interleaved round-robin, so that the combined space is
composed alternately of stripes from each drive. In effect, the
storage space of the drives is shuffled like a deck of cards. The
type of application environment, I/O or data intensive, determines
whether large or small stripes should be used.
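
A minimal sketch of how round-robin striping maps a logical block to a physical location; the drive count and the helper name are illustrative assumptions, not values from the text.

    # Map a logical block number to (drive index, stripe number on that drive)
    # for a simple round-robin striped array.
    def locate(logical_block, num_drives):
        stripe, drive = divmod(logical_block, num_drives)
        return drive, stripe

    # With 4 drives, blocks 0-3 land on drives 0-3 in stripe 0,
    # blocks 4-7 land on drives 0-3 in stripe 1, and so on.
    for block in range(8):
        print(block, locate(block, num_drives=4))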


Most
multi-user operating systems today, like NT, Unix and Netware,
support overlapped disk I/O operations across multiple drives.
However, in order to maximize throughput for the disk subsystem, the
I/O load must be balanced across all the drives so that each drive
can be kept busy as much as possible. In a multiple drive system
without striping, the disk I/O load is never perfectly balanced. Some
drives will contain data files which are frequently accessed and some
drives will only rarely be accessed. In I/O intensive environments,
performance is optimized by striping the drives in the array with
stripes large enough so that each record potentially falls entirely
within one stripe. This ensures that the data and I/O will be evenly
distributed across the array, allowing each drive to work on a
different I/O operation, and thus maximize the number of simultaneous
I/O operations which can be performed by the array.


In
data intensive environments and single-user systems which access
large records, small stripes (typically one 512-byte sector in
length) can be used so that each record will span across all the
drives in the array, each drive storing part of the data from the
record. This causes long record accesses to be performed faster,
since the data transfer occurs in parallel on multiple drives.
Unfortunately, small stripes rule out multiple overlapped I/O
operations, since each I/O will typically involve all drives.
However, operating systems like DOS which do not allow overlapped
disk I/O, will not be negatively impacted. Applications such as
on-demand video/audio, medical imaging and data acquisition, which
utilize long record accesses, will achieve optimum performance with
small stripe arrays.


A
potential drawback to using small stripes is that synchronized
spindle drives are required in order to keep performance from being
degraded when short records are accessed. Without synchronized
spindles, each drive in the array will be at different random
rotational positions. Since an I/O cannot be completed until every
drive has accessed its part of the record, the drive which takes the
longest will determine when the I/O completes. The more drives in the
array, the more the average access time for the array approaches the
worst case single-drive access time. Synchronized spindles assure
that every drive in the array reaches its data at the same time. The
access time of the array will thus be equal to the average access
time of a single drive rather than approaching the worst case access
time.




  3. The different RAID levels




RAID-0


RAID
Level 0 is not redundant, hence does not truly fit the "RAID"
acronym. In level 0, data is split across drives, resulting in higher
data throughput. Since no redundant information is stored,
performance is very good, but the failure of any disk in the array
results in data loss. This level is commonly referred to as striping.


RAID-1


RAID
Level 1 provides redundancy by writing all data to two or more
drives. The performance of a level 1 array tends to be faster on
reads and slower on writes compared to a single drive, but if any one
drive fails, no data is lost. This is a good entry-level redundant
system, since only two drives are required; however, since one drive
is used to store a duplicate of the data, the cost per megabyte is
high. This level is commonly referred to as mirroring.


RAID-2


RAID
Level 2, which uses Hamming error correction codes, is intended for
use with drives which do not have built-in error detection. All SCSI
drives support built-in error detection, so this level is of little
use when using SCSI drives.


RAID-3


RAID
Level 3 stripes data at a byte level across several drives, with
parity stored on one drive. It is otherwise similar to level 4.
Byte-level striping requires hardware support for efficient use.


RAID-4


RAID
Level 4 stripes data at a block level across several drives, with
parity stored on one drive. The parity information allows recovery
from the failure of any single drive. The performance of a level 4
array is very good for reads (the same as level 0). Writes, however,
require that parity data be updated each time. This slows small
random writes, in particular, though large writes or sequential
writes are fairly fast. Because only one drive in the array stores
redundant data, the cost per megabyte of a level 4 array can be
fairly low.


RAID-5


RAID
Level 5 is similar to level 4, but distributes parity among the
drives. This can speed small writes in multiprocessing systems, since
the parity disk does not become a bottleneck. Because parity data
must be skipped on each drive during reads, however, the performance
for reads tends to be considerably lower than a level 4 array. The
cost per megabyte is the same as for level 4.
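
To make the parity idea behind levels 3, 4 and 5 concrete, here is a minimal sketch (not a real RAID implementation) showing how XOR parity lets any single lost block be rebuilt from the surviving blocks:

    from functools import reduce

    def xor_blocks(*blocks):
        # XOR equal-length byte blocks together (the parity operation).
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    data = [b"AAAA", b"BBBB", b"CCCC"]     # data blocks on three drives
    parity = xor_blocks(*data)             # parity block stored on a fourth drive

    # Simulate losing the second drive: rebuild its block from the rest plus parity.
    rebuilt = xor_blocks(data[0], data[2], parity)
    assert rebuilt == data[1]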


Summary:





    • RAID-0 is the fastest
      and most efficient array type but offers no fault-tolerance.


    • RAID-1
      is the array of choice for performance-critical, fault-tolerant
      environments. In addition, RAID-1 is the only choice for
      fault-tolerance if no more than two drives are desired.


    • RAID-2
      is seldom used today since ECC is embedded in almost all modern
      disk drives.


    • RAID-3
      can be used in data intensive or single-user environments which
      access long sequential records to speed up data transfer. However,
      RAID-3 does not allow multiple I/O operations to be overlapped and
      requires synchronized-spindle drives in order to avoid performance
      degradation with short records.


    • RAID-4
      offers no advantages over RAID-5 and does not support multiple
      simultaneous write operations.


    • RAID-5
      is the best choice in multi-user environments which are not write
      performance sensitive. However, at least three, and more typically
      five drives are required for RAID-5 arrays.



  4. Possible approaches to RAID


    • Hardware RAID
      A hardware-based system manages the RAID subsystem independently
      from the host and presents to the host only a single disk per RAID
      array. This way the host doesn't have to be aware of the RAID
      subsystem(s).


      • The controller-based hardware solution
        DPT's SCSI controllers are a good example of a controller-based
        RAID solution. The intelligent controller manages the RAID
        subsystem independently from the host. The advantage over an
        external SCSI-to-SCSI RAID subsystem is that the controller is
        able to span the RAID subsystem over multiple SCSI channels and
        thereby remove the limiting factor external RAID solutions have:
        the transfer rate over the SCSI bus.


      • The external hardware solution (SCSI-to-SCSI RAID)
        An external RAID box moves all RAID-handling "intelligence" into
        a controller that sits in the external disk subsystem. The whole
        subsystem is connected to the host via a normal SCSI controller
        and appears to the host as a single disk or as multiple disks.
        This solution has drawbacks compared to the controller-based
        solution: the single SCSI channel used in this solution creates a
        bottleneck. Newer technologies like Fibre Channel can ease this
        problem, especially if they allow multiple channels to be trunked
        into a Storage Area Network. Four SCSI drives can already
        completely flood a parallel SCSI bus, since the average transfer
        size is around 4 KB and the command transfer overhead - which
        even in Ultra SCSI is still done asynchronously - takes most of
        the bus time.



    • Software
      RAID



      • The MD driver in the Linux kernel is an example of a RAID
        solution that is completely hardware independent. The Linux MD
        driver currently supports RAID levels 0/1/4/5 plus linear mode.


      • Under
        Solaris you have the Solstice DiskSuite and Veritas Volume Manager
        which offer RAID-0/1 and 5.


      • Adaptec's AAA-RAID controllers are another example. They have no
        RAID functionality whatsoever on the controller; they depend on
        external drivers to provide all RAID functionality. They are
        basically just multiple single AHA2940 controllers integrated on
        one card. Linux detects them as AHA2940s and treats them
        accordingly. Every OS needs its own special driver for this type
        of RAID solution, which is error-prone and not very compatible.



    • Hardware
      vs. Software RAID
      Just like any other application,
      software-based arrays occupy host system memory, consume CPU cycles
      and are operating system dependent. By contending with other
      applications that are running concurrently for host CPU cycles and
      memory, software-based arrays degrade overall server performance.
      Also, unlike hardware-based arrays, the performance of a
      software-based array is directly dependent on server CPU
      performance and load.





Except for the array
functionality, hardware-based RAID schemes have very little in common
with software-based implementations. Since the host CPU can execute
user applications while the array adapter's processor simultaneously
executes the array functions, the result is true hardware
multi-tasking. Hardware arrays also do not occupy any host system
memory, nor are they operating system dependent.



Hardware arrays are also
highly fault tolerant. Since the array logic is based in hardware,
software is NOT required to boot. Some software arrays, however, will
fail to boot if the boot drive in the array fails. For example, an
array implemented in software can only be functional when the array
software has been read from the disks and is memory-resident. What
happens if the server can't load the array software because the disk
that contains the fault tolerant software has failed? Software-based
implementations commonly require a separate boot drive, which is NOT
included in the array.


What is NAT?





Short for Network Address Translation, an Internet standard that enables a local-area network (LAN) to use one set of IP addresses for internal traffic and a second set of addresses for external traffic. A NAT box located where the LAN meets the Internet makes all necessary IP address translations.


NAT serves three main purposes:


  • Provides a type of firewall by hiding internal IP addresses.


  • Enables a company to use more internal IP addresses. Since they're
    used internally only, there's no possibility of conflict with IP
    addresses used by other companies and organizations.


  • Allows a company to combine multiple ISDN connections into a single
    Internet connection.


Also see dynamic NAT and static NAT.
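
To make the translation idea concrete, here is a toy sketch of a port-based NAT table in Python. The addresses, ports and function name are invented for illustration; real NAT devices also track protocol state, timeouts and inbound mappings.

    # Toy NAT table: (internal IP, internal port) -> external port on the NAT box.
    # Outbound packets are rewritten to (public_ip, external_port); replies are
    # matched against the table and rewritten back to the internal address.
    public_ip = "203.0.113.5"      # example public address (documentation range)
    nat_table = {}
    next_port = 40000

    def translate_outbound(src_ip, src_port):
        global next_port
        key = (src_ip, src_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return public_ip, nat_table[key]

    print(translate_outbound("192.168.1.10", 51515))   # ('203.0.113.5', 40000)
    print(translate_outbound("192.168.1.11", 51515))   # ('203.0.113.5', 40001)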






Backup Procedures: The Different Types of Backup




Full Backup:

A full backup is simply backing up all files on the system. Users may choose to reset the archive attributes during a full backup if they plan on doing either of the following two types of partial backups.


Incremental Backup:

An incremental backup backs up only the files modified since the last backup. When running an incremental backup, the archive attribute is reset as each modified file is backed up. Often the incremental backups are appended to the full backup set; the result is a tape with the changes that occurred each day. This type of backup is useful if the user wishes to have an audit trail of file usage activity on their system, and it will enable them to restore a specific day's work without restoring any changes made after that point in time. To do a full restore four days after a full backup, they must restore the full backup and all four data sets after it - unlike the next type of backup.


Differential Backup:

A differential backup is a cumulative backup of changes made since the last full backup. It backs up modified files only but does not reset the archive attribute. The list of files grows each day until the next full backup is performed, clearing the archive attributes. This enables the user to restore all files changed since the last full backup in one pass. These backups can be appended to the full set as well, but keep in mind that each set can contain a different version of a file if that file changes daily. The data sets will always be at least as big as the previous differential (if no changes were made) and will continue to grow as files change. Once a file's archive attribute is set, it will be backed up each day until the next full backup resets the attribute bit.
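
A small sketch of how the archive attribute drives the three backup types described above; the file names and the boolean flag standing in for the archive bit are purely illustrative.

    # archive bit = True means the file changed since it was last backed up.
    files = {"report.doc": True, "budget.xls": True, "notes.txt": False}

    def full_backup(files):
        selected = list(files)                  # everything, regardless of the bit
        for name in files:
            files[name] = False                 # full backup resets the archive bit
        return selected

    def incremental_backup(files):
        selected = [n for n, changed in files.items() if changed]
        for name in selected:
            files[name] = False                 # incremental also resets the bit
        return selected

    def differential_backup(files):
        # Cumulative since the last full backup: selects changed files
        # but leaves the archive bit set, so the set grows day by day.
        return [n for n, changed in files.items() if changed]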


What is TCP/IP?


Transmission Control Protocol/Internet Protocol, the suite of communications protocols used to connect hosts on the Internet. TCP/IP uses several protocols, the two main ones being TCP and IP. TCP/IP is built into the UNIX operating system and is used by the Internet, making it the de facto standard for transmitting data over networks. Even network operating systems that have their own protocols, such as NetWare, also support TCP/IP.


Defining a Cluster in Windows 2000


A
cluster is a group of independent computers that work together to run
a common set of applications and provide the image of a single system
to the client and application. The computers are physically connected
by cables and programmatically connected by cluster software. These
connections allow computers to use failover and load balancing, which
is not possible with a stand-alone computer.


Windows
2000 clustering technology provides high availability, scalability,
and manageability:




  • High availability.
    The cluster is designed to avoid a single point of failure.
    Applications can be distributed over more than one computer,
    achieving a degree of parallelism and failure recovery, and
    providing more availability.


  • Scalability.
    You can increase the cluster's computing power by adding more
    processors or computers.


  • Manageability.
    The cluster appears as a single-system image to end users,
    applications, and the network, while providing a single point of
    control to administrators. This single point of control can be
    remote.














Two Types of Clusters in Windows 2000


In
the Windows 2000 Advanced Server and Datacenter Server operating
systems, Microsoft introduces two clustering technologies that can be
used independently or in combination, providing organizations with a
complete set of clustered solutions that can be selected based on the
requirements of a given application or service. Windows clustering
technologies include:




  • Cluster service. This service is intended primarily to provide
    failover support for applications such as databases, messaging
    systems, and file/print services. Cluster service supports 2-node
    failover clusters in Windows 2000 Advanced Server and 4-node
    clusters in Datacenter Server. Cluster service is ideal for ensuring
    the availability of critical line-of-business and other back-end
    systems, such as Microsoft Exchange Server or a Microsoft SQL Server
    7.0 database acting as a data store for an e-commerce Web site.


  • Network
    Load Balancing (NLB).
    This
    service load balances incoming IP (Internet Protocol) traffic across
    clusters of up to 32 nodes. Network Load Balancing enhances both the
    availability and scalability of Internet server-based programs such
    as Web servers, streaming media servers, and Terminal Services. By
    acting as the load balancing infrastructure and providing control
    information to management applications built on top of Windows
    Management Instrumentation (WMI), Network Load Balancing can
    seamlessly integrate into existing Web server farm infrastructures.
    Network Load Balancing will also serve as an ideal load balancing
    architecture for use with the Microsoft release of the upcoming
    Application Center in distributed Web farm environments.



Network Related Questions





What is a hub?





A. A concentrator that joins multiple clients by means of a single link to the rest of the LAN. A hub has several ports to which clients are connected directly, and one or more ports that can be used to connect the hub to the backbone or to other active network components. A hub functions as a multiport repeater; signals received on any port are immediately retransmitted to all other ports of the hub. Hubs function at the physical layer of the OSI Reference Model.





What is a switch?





A. In networking, a switch is a small device that joins multiple computers together at a low-level network protocol layer. Technically, network switches operate at Layer Two (the Data Link Layer) of the OSI model.





What is the difference between a hub and a switch?





A. Technically speaking, hubs operate using a broadcast model and switches operate using a virtual circuit model. When four computers are connected to a hub, for example, and two of those computers communicate with each other, the hub simply passes through all network traffic to each of the four computers. Switches, on the other hand, are capable of determining the destination of each individual traffic element (such as an Ethernet frame) and selectively forwarding data to the one computer that actually needs it. By generating less network traffic in delivering messages, a switch performs better than a hub on busy networks.
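
A toy simulation of that difference, offered as a sketch; the port numbers and MAC addresses are invented for illustration, and real switches also age out table entries and handle broadcasts specially.

    def hub_forward(in_port, ports):
        # A hub repeats every frame out of every port except the one it arrived on.
        return [p for p in ports if p != in_port]

    mac_table = {}   # learned MAC address -> switch port

    def switch_forward(in_port, src_mac, dst_mac, ports):
        mac_table[src_mac] = in_port               # learn where the sender lives
        if dst_mac in mac_table:
            return [mac_table[dst_mac]]            # forward only where needed
        return [p for p in ports if p != in_port]  # unknown destination: flood

    ports = [1, 2, 3, 4]
    print(hub_forward(1, ports))                          # [2, 3, 4]
    print(switch_forward(1, "aa:aa", "bb:bb", ports))     # not learned yet: [2, 3, 4]
    print(switch_forward(2, "bb:bb", "aa:aa", ports))     # learned: [1]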





What is a router?





A. A device that determines the next network point to which a data packet should be forwarded en route toward its destination. The router is connected to at least two networks and determines which way to send each data packet based on its current understanding of the state of the networks it is connected to. Routers create or maintain a table of the available routes and use this information to determine the best route for a given data packet.
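
A minimal sketch of that table lookup using Python's ipaddress module; the routes and next-hop addresses are made-up examples, and real routers also track metrics, interfaces and dynamic routing updates.

    import ipaddress

    # Routing table: destination network -> next hop.
    routes = {
        ipaddress.ip_network("10.0.0.0/8"): "10.0.0.1",
        ipaddress.ip_network("10.1.0.0/16"): "10.1.0.1",
        ipaddress.ip_network("0.0.0.0/0"): "192.0.2.1",   # default route
    }

    def next_hop(destination):
        addr = ipaddress.ip_address(destination)
        matches = [net for net in routes if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
        return routes[best]

    print(next_hop("10.1.2.3"))    # 10.1.0.1 (most specific match)
    print(next_hop("8.8.8.8"))     # 192.0.2.1 (default route)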





What is a network bridge?





A.
A bridge device filters data traffic at a network boundary. Bridges
reduce the amount of traffic on a LAN by dividing it into two
segments.


Bridges
operate at the data link layer (Layer 2) of the OSI model. Bridges
inspect incoming traffic and decide whether to forward or discard it.
An Ethernet bridge, for example, inspects each incoming Ethernet
frame - including the source and destination MAC addresses, and
sometimes the frame size - in making individual forwarding decisions.


Bridges serve a similar function to switches, which also operate at Layer 2. Traditional bridges, though, support one network boundary, whereas switches usually offer four or more hardware ports. Switches are sometimes called "multi-port bridges" for this reason.





What is a MAC address?





A. A MAC address is a number used by network adapters to uniquely identify themselves on a LAN. MAC addresses are 12-digit hexadecimal numbers. MAC addresses work at the data link layer of the OSI model and are mapped to IP addresses through the Address Resolution Protocol (ARP).
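
A quick sketch of normalizing and validating a MAC address string; the address and the helper name are just illustrative.

    import re

    def normalize_mac(mac):
        # Strip common separators and check for exactly 12 hexadecimal digits.
        digits = re.sub(r"[:\-.]", "", mac).lower()
        if not re.fullmatch(r"[0-9a-f]{12}", digits):
            raise ValueError("not a valid MAC address: " + mac)
        # Re-insert colons in pairs: aa:bb:cc:dd:ee:ff
        return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

    print(normalize_mac("00-1A-2B-3C-4D-5E"))   # 00:1a:2b:3c:4d:5e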





What is a subnet?





A.
A subnet is a logical grouping of connected network devices. When a
subnet is properly implemented, both the performance and security of
networks can be improved.


OR


















A portion of a network that shares a common address component. On TCP/IP networks, subnets are defined as all devices whose IP addresses have the same prefix. For example, all devices with IP addresses that start with 100.100.100. would be part of the same subnet. Dividing a network into subnets is useful for both security and performance reasons. IP networks are divided using a subnet mask.
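
The same idea expressed with Python's ipaddress module; the 100.100.100.0/24 network below mirrors the prefix example above under the assumption of a 255.255.255.0 subnet mask.

    import ipaddress

    subnet = ipaddress.ip_network("100.100.100.0/24")          # mask 255.255.255.0

    print(subnet.netmask)                                      # 255.255.255.0
    print(ipaddress.ip_address("100.100.100.37") in subnet)    # True
    print(ipaddress.ip_address("100.100.101.37") in subnet)    # False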





What is TCP/IP?





A. Transmission Control Protocol/Internet Protocol is a combined set of protocols that performs the transfer of data between two computers. TCP monitors and ensures the correct transfer of data. IP receives the data from TCP, breaks it up into packets, and ships it off to a network within the Internet. TCP/IP is also used as a name for the protocol suite that incorporates these functions and others.







Motherboard Related Questions





What is a bus?





A. A collection of wires through which data is transmitted from one part of a computer to another. You can think of a bus as a highway on which data travels within a computer. When used in reference to personal computers, the term bus usually refers to the internal bus. This is a bus that connects all the internal computer components to the CPU and main memory. There's also an expansion bus that enables expansion boards to access the CPU and memory.


All buses consist of two parts -- an address bus and a data bus. The data bus transfers actual data whereas the address bus transfers information about where the data should go.


The size of a bus, known as its width, is important because it determines how much data can be transmitted at one time. For example, a 16-bit bus can transmit 16 bits of data, whereas a 32-bit bus can transmit 32 bits of data.


Every bus has a clock speed measured in MHz. A fast bus allows data to be transferred faster, which makes applications run faster. On PCs, the old ISA bus is being replaced by faster buses such as PCI.
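
As a rough illustration of how width and clock speed combine, here is a sketch of the idealized peak-throughput calculation; the figures are theoretical maximums, and real buses lose a good deal to protocol overhead.

    # Peak theoretical bus throughput = width in bytes x clock rate in MHz (= MB/s).
    def peak_mb_per_s(width_bits, clock_mhz):
        return (width_bits / 8) * clock_mhz

    print(peak_mb_per_s(16, 8))     # 16-bit bus at 8 MHz  -> 16.0 MB/s
    print(peak_mb_per_s(32, 33))    # 32-bit bus at 33 MHz -> 132.0 MB/s (classic PCI)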


Nearly
all PCs made today include a local bus for data that requires
especially fast transfer speeds, such as video data. The local bus is
a high-speed pathway that connects directly to the processor.


Several different types of buses are used on Apple Macintosh computers. Older Macs use a bus called NuBus, but newer ones use PCI.




















