Best IT Documents.com Blog


Caring for Archives

Posted in Compliances (1300) by Guest on the May 31st, 2010

Physical maintenance of the records

All metal paper clips, rusting staples, and rubber bands should be removed.

Documents should be in containers that prevent dust from entering

Large items should be stored flat.

The ideal storage area for records:

Amenable to consistent environmental control (temperature and humidity)

No water pipes running nearby

Little or no natural light

Why does paper deteriorate?

Wood pulp = acid content = slow burn

Any paper manufactured since the mid-19th century, unless it is of the type designated permanent/durable or acid-free, has an expected useful life of less than fifty years.

What is the best defense against paper deterioration?

Environmental controls

A chemical reaction is taking place in acidic paper, and this reaction is accelerated by high temperatures and high humidity

Ideal temperature: 60-68 degrees F

Ideal relative humidity level: 40-60%

If ideal conditions cannot be reached, try to maintain CONSISTENT conditions (a small monitoring sketch follows).
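Where the storage area has a temperature/humidity datalogger, a small periodic script can flag drift from these targets. This is a minimal sketch, not from the original post; the readings list stands in for a real datalogger export, and the thresholds simply mirror the ranges above:

```python
# Check datalogger readings against the target ranges above; the readings are hypothetical.
TEMP_RANGE_F = (60, 68)   # ideal temperature, degrees Fahrenheit
RH_RANGE_PCT = (40, 60)   # ideal relative humidity, percent

readings = [              # (timestamp, temperature F, relative humidity %)
    ("2010-05-01 08:00", 66, 48),
    ("2010-05-01 20:00", 71, 63),
]

for when, temp_f, rh in readings:
    problems = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        problems.append(f"temperature {temp_f}F outside {TEMP_RANGE_F}")
    if not RH_RANGE_PCT[0] <= rh <= RH_RANGE_PCT[1]:
        problems.append(f"humidity {rh}% outside {RH_RANGE_PCT}")
    if problems:
        print(f"{when}: " + "; ".join(problems))
```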

Preservation common sense:

Some records are valuable as physical artifacts while others are valuable primarily for the information they contain.

For some deteriorating items, photocopying them onto acid-free paper and discarding the originals makes more sense than spending money to deacidify, repair, or encapsulate them.

Optical scanning and digitization can provide durable access copies, but digital files bring preservation challenges of their own (see the section on electronic records below)

  

Repairing materials:

NEVER use cellophane tape

Get some basic supplies:

archival repair tape

wipe cloths

acid free paper

  

Special needs for photographs

1) Never label photographs on their reverse with ballpoint pen. The ink may bleed through to the front. Reference numbers on mounts should be written discreetly in light-resistant ink. Reference numbers on the back of photographs that have not been mounted can be written with a soft pencil that leaves a clear mark.

2) If possible, put photographs in chemically stable polyester or paper sleeves (e.g., Mylar or acid-free paper). Such sleeves help prevent curling of photographs and reduce physical contact with the photos. It is also possible to label the sleeves with identifying information or to insert a separate written label inside the sleeve.

3) If it is not feasible for you to use sleeves, be sure to store the photographs in such a way that they will not curl over time and will not be subject to excessive handling.

4) Photographs should be handled with cotton gloves, or held by the edges to avoid skin contact with the image.

5) Photographs are very susceptible to water damage and should not be stored near sources of water. If you ever have a flood situation in the archives, be sure to rescue the photographs first.

6) Photographs are susceptible to insect damage, so they may be best stored in a metal container if insects are likely to be a major problem.

7) Photographs should not be scanned or photocopied repeatedly.

  

Special needs for films and videos

Be aware of the dangers of nitrate film

Make a video cassette use copy for films.

Store videos upright with tape on bottom.

Rewind films and videos periodically

  

Electronic records:

The conservative stance for a repository to take regarding electronic records is to require that all records be deposited in hard copy.

This stance will be increasingly untenable as organizations and individuals wholeheartedly enter the electronic age.

Even now, there is a danger in requesting hard copy printouts of records to be saved. The extra steps of selecting and printing records to be saved will inevitably limit the number and variety of records saved.

  

Basic strategies for preserving electronic data:

Medium refreshing: copying data from one physical carrier to another of the same type, e.g., backing up a hard drive, diskette, or CD-ROM.

Medium conversion: transferring electronic data from one medium to another – this might mean transferring to a non-digital medium.

High quality acid neutral paper can last a century or longer and archival quality microfilm is projected to last 300 years or more. Paper and microfilm have the additional advantage of requiring no special hardware or software for retrieval or viewing

Format conversion: converting the data format in order to reduce the number of different formats being used in a particular setting, e.g. converting WordPerfect word processing files to a Word format.

Migration: converting the data so that it can operate with different hardware and software than originally intended. This could involve transferring data to a central server or computer housed in the archives.

The most important thing that an archivist can do at this point is to work with those generating the records to raise their consciousness about the problems involved in preserving electronic data. If records are received in electronic format, repositories may need to reformat them at intervals to avoid obsolescent formats and the need for obsolete hardware.

A schedule should be put in place, and a specific person made responsible, for verifying at regular intervals that the following types of electronic data are still readable (a minimal verification sketch follows this list):

Email
Word processing and web documents
Databases.
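A minimal sketch, not from the original post, of what such a periodic check might look like: it walks a directory of archived files, confirms each file can still be opened, records a SHA-256 fingerprint, and reports files whose fingerprints have changed since the last run. The archive path and inventory file name are assumptions.

```python
# Minimal readability/fixity check sketch; ARCHIVE_ROOT and INVENTORY are assumptions.
import hashlib, json, pathlib

ARCHIVE_ROOT = pathlib.Path("/srv/archive")   # hypothetical archive location
INVENTORY = pathlib.Path("inventory.json")    # fingerprints recorded by the previous run

def fingerprint(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:               # also proves the file is still readable
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

previous = json.loads(INVENTORY.read_text()) if INVENTORY.exists() else {}
current = {}
for path in ARCHIVE_ROOT.rglob("*"):
    if path.is_file():
        try:
            current[str(path)] = fingerprint(path)
        except OSError as err:
            print(f"UNREADABLE: {path} ({err})")

for name, digest in current.items():
    if name in previous and previous[name] != digest:
        print(f"CHANGED since last check: {name}")

INVENTORY.write_text(json.dumps(current, indent=2))
```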

Disaster preparedness

A disaster plan in the event of fire or flood should be an integral part of any repository’s program.

It is important to have the plan in written form because of potential chaos and confusion at the height of the emergency

If there should be water damage, it is best to rescue photographs, microfilm, and any materials with coated paper first.

http://bestitdocuments.com/Services.html

 


The Loss of Corporate Knowledge

Posted in Business (600),Security (1500) by Guest on the May 31st, 2010

The challenges are related to people and strategy:

Attracting & Retaining Talented People – 9%

Identifying the Right Team/Leader for Knowledge – 15%

Defining Standard Processes for Knowledge Work – 24%

Setting the Appropriate Scope for Knowledge Initiatives – 24%

Mapping the Organization’s Existing Knowledge – 28%

Justifying the Use of Scarce Resources for Knowledge Initiatives – 34%

Determining What Knowledge Should Be Managed – 40%

Measuring the Value and Performance of Knowledge Assets – 43%

Changing People’s Behavior – 56%

 

People contribute to knowledge bases

Process – Embedded in core processes

Content – Consistent with strategy

Technology – “Just-in-time” delivery


Management and Support Planning

Posted in Data Center - SOC - NOC by Guest on the May 30th, 2010

Planning Fundamentals

Understand: “To perceive and comprehend the nature and significance of”

Communicate: “The exchange of thoughts”

Involve: “To contain or include”

Document: Take notes, write everything down  

Technical Architecture     

Establish a Technical Architecture and use it!!!

TA is the hardware, operating system, application software, transmission medium, and methodology for an information platform.

Standards Based

Security & Disaster Recovery  

Implement and follow standards

Visual standards

Development standards

Documentation standards

Page – file – directory standards

Security

Backup and disaster recovery standards  

Document – “Write it Down”

Reduce revisits

Develop a TO DO List

Self documents the project process

Reminds you what you did months later

Reduces lost ideas  

Keep current and train staff to:

Reduce errors

Reduce stress

Reduce delivery time

Reduce life cycle cost of application  

Use outside expertise to:

Plan new projects

Address areas that you do not know well

Do implementations that you will only manage

Test security

Use a life-cycle approach

Use tools that provide real benefit

Monitor system performance to ensure stability and acceptable response time  

Review and test security

Support Summary

Systems Approach

Planning Fundamentals

Technical Architecture

Standards

Document

Life-Cycle Approach

Security

http://bestitdocuments.com/Services.html


Common Sense Identity Theft

Posted in Compliances (1300) by Guest on the May 30th, 2010

Identity Theft

Identity theft is a growing problem in the United States today. It occurs when an unauthorized person uses another individual’s personal data and assumes that person’s identity in making financial transactions. To commit identity theft, a person somehow gains access to another person’s identification, such as a driver’s license or Social Security card, credit card accounts, and/or bank account information. With very little of this information, a criminal can financially drain bank accounts and run up an enormous amount of debt.

Here are some general guidelines for protecting yourself from identity theft:

Do not give your Social Security number or driver’s license number to anyone unless an organization or business has a legal right to request that information.

Safeguard your checkbook and identification when making purchases at stores.

Avoid providing your birth date and your mother’s maiden name unless required by law. (Your mother’s maiden name is often the keyword to gaining entry to credit card accounts via the telephone.)

Avoid providing too much personal information on warranty cards, registration cards, etc.

Check your bank and credit card statements very carefully. Report any discrepancies immediately to the respective financial institution.

Avoid making online purchases from obscure organizations on the Internet. Research the organization before making a credit card purchase.

Do not give your credit card number to Internet auction sellers. Use a money order, cashier’s check, or an intermediary financial organization, such as PayPal, to pay for online purchases.

Keep your credit card receipts to compare to your monthly statements. When you no longer need these receipts, shred them; do not throw them away in complete form.

Shred all unwanted “junk mail” from financial organizations that offer credit cards.

http://bestitdocuments.com/Services.html


TGIS – Sample Engineering Design / Development Considerations

Posted in Business (600) by Guest on the May 30th, 2010

Route Design and Seasonal Field Activity Cycles

Route Design & Analysis – Happens continuously, but there are critical points where data and map products are required in a timely manner to support the following activities.

Thaw Settlement Calculations

Frost Heave Calculations

Pipeline Design Criteria

Stress Analysis for Frost Heave

Stress Analysis for Thaw Settlement

Thermal-Hydraulic Modeling

Facility and Road Design

River Crossing Design

Landform Engineering Properties

Material Site Selection and Access

Geo-hazards Studies

Compressor Stations

Pipeline Construction Support & Logistics

Pipeline Design Characterizations

… and many others

  

Seasonal Field Activities – There are two field seasons, Winter and Summer. Each requires data and map products in a timely manner to support engineering and sub-contracting personnel that work in the field.

Soil Temperature Monitoring

Soil Surveys

Fault Studies

Landslide Studies

Hydrology Studies

Geotechnical Boring

Stream Crossing Studies

Thermal-Hydraulic Model Testing

Landform Ground Truthing

Routing Studies

http://bestitdocuments.com/Services.html

 


Introducing Firewalls

Posted in Firewalls (75),Security (1500) by Guest on the May 29th, 2010

Firewall Advantages and Limitations

Now that the theory behind a firewall has been presented, this section examines the several kinds of firewalls available and highlights the kind of protection each can offer. The position a firewall occupies with respect to the rest of the network restricts entry to the system to a single, carefully controlled point, usually where the internal network connects to the Internet. The firewall can then act as a choke point, providing significant leverage over the amount and kinds of traffic that will pass to the internal network. As mentioned in passing earlier, a firewall can be seen as a method of keeping attackers away from your network’s other defenses at the host level. A firewall will limit the system’s exposure to potential threats and provide an efficient place from which to log Internet activity. Keep in mind that no security model protects against every possible attack; the aim is to make break-ins rare, brief, and inexpensive.

As well as understanding what a firewall can do, it is equally important to understand what a firewall cannot do. No matter what kind of firewall is being considered, all of the limitations below are present to some degree.

1. A firewall will provide no protection against malicious insiders. Once an attacker is inside the firewall, it can do very little to protect you.

2. A firewall cannot protect against connections that don’t go through it. To obtain the best protection from a firewall, all ways into the system must pass through a firewall. This implies that one site could choose to have any number of firewalls present.

3. Since a firewall is designed with today’s threats in mind, you can’t rely on it to protect you against completely new threats. The firewall must be kept up to date through regular maintenance activities.

4. A firewall can’t fully protect against viruses. A firewall could inspect every single packet that enters the system, but it is not designed to determine whether a packet contains part of a valid email message or part of a virus.

Another issue that must be brought to light when discussing the limitations of a firewall is the fact that it interferes with the Internet. Although this is more an essential design issue than a limitation, it is true that a firewall interrupts the end-to-end communication model of the Internet. This can result in a decrease in speed, or the introduction of all sorts of problems and annoying side effects. Integrating a firewall into a network where there previously was none can be difficult to do transparently.

Types of Firewalls

There are four basic kinds of firewalls in use today. The first is the Packet Firewall. These firewalls are usually present on a router and will effectively pass some packets and block others.

Each IP packet contains the source and destination addresses, the protocol (TCP, UDP, or ICMP), the source and destination ports, the ICMP message type, and the packet size. Some advantages of a Packet Firewall are:

Every network requires a router in order to connect to the Internet and so this is an attractive alternative for low budget organizations

A single screening router can protect an entire network

Simple packet filtering can be very efficient

Packet filtering is widely available

A Packet Firewall is not without its disadvantages, though. The rules used to filter packets can be difficult to configure and test. The presence of a packet filter on a router can reduce its performance somewhat, but this is highly dependent on the make of the router. It is also not always possible to readily enforce a security policy by using just packet filtering on a single router.
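As a minimal sketch of the packet-filtering idea described above (not taken from any particular product), the rule list and the sample packets below are assumptions chosen only for illustration; real packet filters also match on source addresses and interfaces:

```python
# Illustrative packet-filter evaluation; the rule set and test packets are hypothetical.
RULES = [
    # (action, protocol, destination_port)
    ("allow", "tcp", 25),    # inbound SMTP to the mail host
    ("allow", "tcp", 80),    # inbound HTTP to the web server
    ("deny",  "tcp", 23),    # explicitly block telnet
]

def decide(protocol: str, dst_port: int) -> str:
    """Return the action of the first matching rule; deny if nothing matches."""
    for action, rule_proto, rule_port in RULES:
        if protocol == rule_proto and dst_port == rule_port:
            return action
    return "deny"            # default stance: deny anything not explicitly permitted

print(decide("tcp", 80))     # allow
print(decide("udp", 53))     # deny (no matching rule)
```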

The second major type of firewall is known as a Traditional Proxy Based Firewall. All of the users on the system must use special procedures and network clients that are fully aware of the proxy. These proxies are specialized programs that take requests for Internet services and provide replacement connections and act as gateways to the service. There is some excellent software that is available for proxying.

There are several toolkits available that will either allow you to easily convert existing client / server applications into proxy based versions or provide you with a suite of proxy servers for common Internet protocols.

Proxy services have the following advantages:

they can be quite effective at logging; since they understand the application protocol, they can log only the essential information, which makes for smaller, more useful logs

they may also provide a form of caching, which can help to increase performance and reduce the load on network links

they can be configured to do much more intelligent filtering

since they are actively involved in the connection, they provide a place to do user level authentication

they automatically provide protection against deliberately malformed IP packets, since they generate completely new IP packets to be delivered to the client

a single proxy machine can relay requests to the Internet for a number of other machines at once. The proxy machine is the only machine that requires a valid IP address, which makes proxying an easy way to economize on address space.

It can prove to be difficult to find proxy services that are as up to date as the same non-proxy service, since the development of the proxy can only begin once the new service is available. Finding proxy services for newer or less widely used services can also present a challenge. The services that a proxy provides may require different servers for each service. Setting up and configuring all of these servers can take a lot of time. One major disadvantage to proxy services is that the internal user is aware of the proxy, and documentation for applications that the user is trying to use is usually not written with the firewall in mind.
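Stripped of authentication, filtering, and protocol intelligence, the core of a proxy is a process that accepts a client connection and opens a second connection to the real service on the client’s behalf. The minimal, hypothetical relay sketch below illustrates only that core; the listening port and upstream address are assumptions, and a real proxy server adds per-protocol logic on top of this:

```python
# Minimal single-connection TCP relay sketch; LISTEN_PORT and UPSTREAM are assumptions.
import socket, threading

LISTEN_PORT = 8080                         # hypothetical port the proxy listens on
UPSTREAM = ("internal-mail.example", 25)   # hypothetical service being proxied

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", LISTEN_PORT))
listener.listen(1)

client, addr = listener.accept()
print(f"client {addr} connected; relaying to {UPSTREAM}")   # the logging hook
server = socket.create_connection(UPSTREAM)

threading.Thread(target=pump, args=(client, server), daemon=True).start()
pump(server, client)                       # relay replies back to the client
```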

A packet rewriting firewall is the third major type of firewall, and it attempts to solve the problems a firewall creates for the internal user by making the firewall transparent. It does this by taking the contents of inbound IP packets and rewriting them as they pass between the internal network and the Internet. From the outside, all communications appear to be mediated through a proxy on the firewall, and from the inside it appears that each machine is talking directly to another host on the Internet.

Most proxy and packet rewriting based firewalls are effective only when they are used in conjunction with some way of controlling IP traffic between the internal clients and the servers on the Internet. Two of the most common hardware configurations used to accomplish this task are known as a screening router and a dual homed host. Both of these configurations provide a way to examine packets traveling in both directions and filter (or rewrite) them based on the site’s security policy. A screening router and a dual homed host both sit between a network and the Internet. A screening router is effectively the same as a packet filtering router, and a dual homed host is just a host with two NICs (Network Interface Cards).

The last type of firewall to examine is known as a screen. This is another way of bisecting Ethernet traffic with a pair of interfaces, however in this case, the screen doesn’t have an IP address. It contains a complex set of rules on which it bases its decisions regarding which packets to forward to its other interface. The fact that it has no IP address makes it nearly transparent, and highly resilient to attacks over the network.

Firewalls are built with different combinations of the essential building blocks mentioned above. Using an additional two concepts provides a large number of alternate firewall architectures designed to suit any situation.

First is the concept of a Bastion Host. This is a computer that represents an organization’s public presence on the Internet. It is a highly secured machine that is accessible by everyone. This machine has been built and designed from the beginning to be the most fortified host on the network, because it is also the most exposed host on the network. It can be likened to the lobby of a building: anyone can come in and ask questions at the desk, but they may not be permitted to go up the stairs or use the elevators to access the rest of the building.

The second concept is that of a subnet, or more precisely a screened subnet. This can basically be thought of as a group of computers that are all connected together on the same wire. The computers are all able to talk to one another locally, but all other connections must first pass through a router that is acting as a screen.

Now, combining these two concepts with an exterior router, then a bastion host, followed by an interior router, a perimeter network can be formed with this screened subnet architecture. This effectively places all of the machines that are most likely to be attacked together, and introduces another degree of separation between these more vulnerable machines and the rest of the internal network.

http://bestitdocuments.com/Services.html

 


What are the basic types of firewalls?

Posted in Firewalls (75),Security (1500) by Guest on the May 29th, 2010

Conceptually, there are two types of firewalls:

1. Network Level

2. Application Level

They are not as different as you might think, and the latest technologies are blurring the distinction to the point where it’s no longer clear whether either one is “better” or “worse.” As always, you need to be careful to pick the type that meets your needs.

Network level firewalls generally make their decisions based on the source and destination addresses and ports in individual IP packets. A simple router is the “traditional” network level firewall, since it is not able to make particularly sophisticated decisions about what a packet is actually talking to or where it actually came from. Modern network level firewalls have become increasingly sophisticated, and now maintain internal information about the state of connections passing through them, the contents of some of the data streams, and so on. One important distinction about many network level firewalls is that they route traffic directly through them, so to use one you usually need a validly assigned IP address block. Network level firewalls tend to be very fast and very transparent to users.

[Screened host firewall]

Example Network level firewall: In this example, a network level firewall called a “screened host firewall” is represented. In a screened host firewall, access to and from a single host is controlled by means of a router operating at a network level. The single host is a bastion host; a highly-defended and secured strong-point that (hopefully) can resist attack.

[Screened subnet firewall]

Example Network level firewall: In this example, a network level firewall called a “screened subnet firewall” is represented. In a screened subnet firewall, access to and from a whole network is controlled by means of a router operating at a network level. It is similar to a screened host, except that it is, effectively, a network of screened hosts.

Application level firewalls generally are hosts running proxy servers, which permit no traffic directly between networks, and which perform elaborate logging and auditing of traffic passing through them. Since the proxy applications are software components running on the firewall, it is a good place to do lots of logging and access control. Application level firewalls can be used as network address translators, since traffic goes in one “side” and out the other, after having passed through an application that effectively masks the origin of the initiating connection. Having an application in the way may in some cases impact performance and may make the firewall less transparent. Early application level firewalls, such as those built using the TIS firewall toolkit, are not particularly transparent to end users and may require some training. Modern application level firewalls are often fully transparent. Application level firewalls tend to provide more detailed audit reports and tend to enforce more conservative security models than network level firewalls.

 

[Dual-Homed Gateway]

Example Application level firewall: In this example, an application level firewall called a “dual homed gateway” is represented. A dual homed gateway is a highly secured host that runs proxy software. It has two network interfaces, one on each network, and blocks all traffic passing through it.

The Future of firewalls lies someplace between network level firewalls and application level firewalls. It is likely that network level firewalls will become increasingly “aware” of the information going through them, and application level firewalls will become increasingly “low level” and transparent. The end result will be a fast packet-screening system that logs and audits data as it passes through. Increasingly, firewalls (network and application layer) incorporate encryption so that they may protect traffic passing between them over the Internet. Firewalls with end-to-end encryption can be used by organizations with multiple points of Internet connectivity to use the Internet as a “private backbone” without worrying about their data or passwords being sniffed.

http://bestitdocuments.com/Services.html

 


Glossary of Firewall Related Terms

Posted in Firewalls (75),Security (1500) by Guest on the May 29th, 2010

Abuse of Privilege:

When a user performs an action that they should not have, according to organizational policy or law.

Access Control Lists:

Rules for packet filters (typically routers) that define which packets to pass and which to block.

Access Router:

A router that connects your network to the external Internet. Typically, this is your first line of defense against attackers from the outside Internet. By enabling access control lists on this router, you’ll be able to provide a level of protection for all of the hosts “behind” that router, effectively making that network a DMZ instead of an unprotected external LAN.

Application-Level Firewall:

A firewall system in which service is provided by processes that maintain complete TCP connection state and sequencing. Application level firewalls often re-address traffic so that outgoing traffic appears to have originated from the firewall, rather than the internal host.

Authentication:

The process of determining the identity of a user that is attempting to access a system.

 

Authentication Token:

A portable device used for authenticating a user. Authentication tokens operate by challenge/response, time-based code sequences, or other techniques. This may include paper-based lists of one-time passwords.

Authorization:

The process of determining what types of activities are permitted. Usually, authorization is in the context of authentication: once you have authenticated a user, they may be authorized different types of access or activity.

Bastion Host:

A system that has been hardened to resist attack, and which is installed on a network in such a way that it is expected to potentially come under attack. Bastion hosts are often components of firewalls, or may be “outside” Web servers or public access systems. Generally, a bastion host is running some form of general purpose operating system (e.g., Unix, VMS, NT, etc.) rather than a ROM-based or firmware operating system.

Challenge/Response:

An authentication technique whereby a server sends an unpredictable challenge to the user, who computes a response using some form of authentication token.
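A minimal sketch, not part of the original glossary, of the challenge/response idea using a shared secret and an HMAC; the secret and the way the challenge is exchanged are illustrative assumptions rather than any particular token product:

```python
# Challenge/response sketch using a shared secret; purely illustrative.
import hmac, hashlib, secrets

shared_secret = b"example-token-secret"   # assumption: provisioned on both token and server

# Server side: issue an unpredictable challenge.
challenge = secrets.token_hex(16)

# Token side: compute the response from the challenge and the shared secret.
response = hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()

# Server side: recompute the expected value and compare in constant time.
expected = hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()
print("authenticated" if hmac.compare_digest(response, expected) else "rejected")
```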

Chroot:

A technique under Unix whereby a process is permanently restricted to an isolated subset of the filesystem.

Cryptographic Checksum:

A one-way function applied to a file to produce a unique “fingerprint” of the file for later reference. Checksum systems are a primary means of detecting filesystem tampering on Unix.

Data Driven Attack:

A form of attack in which the attack is encoded in innocuous-seeming data which is executed by a user or other software to implement an attack. In the case of firewalls, a data driven attack is a concern since it may get through the firewall in data form and launch an attack against a system behind the firewall.

Defense in Depth:

The security approach whereby each system on the network is secured to the greatest possible degree. May be used in conjunction with firewalls.

DNS spoofing:

Assuming the DNS name of another system by either corrupting the name service cache of a victim system, or by compromising a domain name server for a valid domain.

Dual Homed Gateway:

A dual homed gateway is a system that has two or more network interfaces, each of which is connected to a different network. In firewall configurations, a dual homed gateway usually acts to block or filter some or all of the traffic trying to pass between the networks.

Encrypting Router:

see Tunneling Router and Virtual Network Perimeter.

Firewall:

A system or combination of systems that enforces a boundary between two or more networks.

Host-based Security:

The technique of securing an individual system from attack. Host based security is operating system and version dependent.

Insider Attack:

An attack originating from inside a protected network.

Intrusion Detection:

Detection of break-ins or break-in attempts either manually or via software expert systems that operate on logs or other information available on the network.

IP Spoofing:

An attack whereby a system attempts to illicitly impersonate another system by using its IP network address.

IP Splicing / Hijacking:

An attack whereby an active, established, session is intercepted and co-opted by the attacker. IP Splicing attacks may occur after an authentication has been made, permitting the attacker to assume the role of an already authorized user. Primary protections against IP Splicing rely on encryption at the session or network layer.

Least Privilege:

Designing operational aspects of a system to operate with a minimum amount of system privilege. This reduces the authorization level at which various actions are performed and decreases the chance that a process or user with high privileges may be caused to perform unauthorized activity resulting in a security breach.

Logging:

The process of storing information about events that occurred on the firewall or network.

Log Retention:

How long audit logs are retained and maintained.

Log Processing:

How audit logs are processed, searched for key events, or summarized.

Network-Level Firewall:

A firewall in which traffic is examined at the network protocol packet level.

Perimeter-based Security:

The technique of securing a network by controlling access to all entry and exit points of the network.

Policy:

Organization-level rules governing acceptable use of computing resources, security practices, and operational procedures.

Proxy:

A software agent that acts on behalf of a user. Typical proxies accept a connection from a user, make a decision as to whether or not the user or client IP address is permitted to use the proxy, perhaps perform additional authentication, and then complete a connection on behalf of the user to a remote destination.

Screened Host:

A host on a network behind a screening router. The degree to which a screened host may be accessed depends on the screening rules in the router.

Screened Subnet:

A subnet behind a screening router. The degree to which the subnet may be accessed depends on the screening rules in the router.

Screening Router:

A router configured to permit or deny traffic based on a set of permission rules installed by the administrator.

Session Stealing:

See IP Splicing.

Trojan Horse:

A software entity that appears to do something normal but which, in fact, contains a trapdoor or attack program.

Tunneling Router:

A router or system capable of routing traffic by encrypting it and encapsulating it for transmission across an untrusted network, for eventual de-encapsulation and decryption.

Social Engineering:

An attack based on deceiving users or administrators at the target site. Social engineering attacks are typically carried out by telephoning users or operators and pretending to be an authorized user, to attempt to gain illicit access to systems.

Virtual Network Perimeter:

A network that appears to be a single protected network behind firewalls, which actually encompasses encrypted virtual links over untrusted networks.

Virus:

A replicating code segment that attaches itself to a program or data file. Viruses might or might not contain attack programs or trapdoors. Unfortunately, many have taken to calling any malicious code a “virus”. If you mean “trojan horse” or “worm”, say “trojan horse” or “worm”.

Worm:

A standalone program that, when run, copies itself from one host to another, and then runs itself on each newly infected host. The widely reported “Internet Virus” of 1988 was not a virus at all, but actually a worm.

http://www.bestitdocuments.com/Incident_response.html

 


Internet Usage Stats

Posted in Business (600) by Guest on the May 29th, 2010

Network Integrity Means

Posted in Networking (340) by Guest on the May 28th, 2010

Worry Free Commerce / E-Commerce

Revenue Protection

Ensuring Profitability

Our security strategy goes beyond penetration testing and encryption; it is meant to address a broader scope, devising a methodology to answer who, what, when, where, and how.

What is my span of control?

What pieces can I put in place?


What is the most cost effective approach?

Where to focus?

           

Develop a project plan, Schedule

Report progress

Adapt to change and requirements

http://bestitdocuments.com/Services.html

 


News Flash – We would like your technical input

Posted in Business (600) by Guest on the May 27th, 2010

Please help us improve the content of this website and this blog. Please submit your IT technology documents for consideration to BestIT.Documents@yahoo.com; your assistance is greatly appreciated.

We are all about fostering Communities of Interest (COIs) working together for:

  • Deliverables
  • Develop (and harmonize) reusable data exchange components
  • Training and Technical Assistance
  • Leverage (Internal and external) data exchange components to build information exchanges

Tools

  • Utilize cross COI governance to maximize inter-operability and reuse
  • Documentation and standards
  • Publish and discover reusable information exchanges
  • Governance and Processes
  • Provide standards, tools, methods, training, technical assistance and implementation support services for enterprise-wide information exchange

Firewall Security Lifecycle

Posted in Firewalls (75),Security (1500) by Guest on the May 27th, 2010

Define network domain security policy

Create high level structure

Examine other firewalls

Create low-level structure

Test firewall / Review security policy

Periodic testing /Maintenance

Firewall Product Evaluation Checklist

Identification – Who are we buying from?

Education and Documentation – Is there sufficient and clear documentation that comes with the product?

Reports and Audits – What is available as far as reports and what audit tools accompany the product?

Attacks and Scenarios – What is our level of protection and what attacks does the current version protect against?

Administrative Concerns – How secure and flexible is the administrative access?

Implementation

The Bottom Line

A firewall is a method of achieving security between trusted and untrusted networks

The choice, configuration, and operation of a firewall are defined by policy, which determines the services and type of access permitted

Firewall = policy+implementation

Firewall = “zone of risk” for the trusted network

A good firewall should:

Support and not impose a security policy

Use a “deny all services except those specifically permitted” policy

Accommodate new facilities and services

Contain advanced authentication measures

Employ filtering techniques to permit or deny services to specific hosts and use flexible and user-friendly filtering

Use proxy services for applications

Handle dial-in

Log suspicious activity
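As a small illustration of the “log suspicious activity” point above (not from the original post), the sketch below scans an OpenSSH-style auth log for repeated failed logins per source address; the log path and alert threshold are assumptions:

```python
# Count failed-login attempts per source address; the path and threshold are assumptions.
import re
from collections import Counter

LOG = "/var/log/auth.log"                 # hypothetical log location
pattern = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

failures = Counter()
with open(LOG) as f:
    for line in f:
        if m := pattern.search(line):
            failures[m.group(1)] += 1

for addr, count in failures.most_common():
    if count >= 5:                        # arbitrary threshold for "suspicious"
        print(f"{addr}: {count} failed logins")
```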

http://bestitdocuments.com/Services.html


Business Continuation Project Considerations

Posted in Business (600),Data Center - SOC - NOC by Guest on the May 27th, 2010

Top 8 Things to Communicate

Proactively ID and leverage existing information upfront

Have a clear definition of scope and deliverables

Team structure should include functional representation

Equalize the workload between your project content experts

Gather feedback along the way to make sure you’re meeting milestones

Establish top down support and participation

Build in project review at various points along the project timeline

Establish organization-wide project collaboration touch points

 

http://bestitdocuments.com/Services.html


PowerPoint – PBX – Public Switched Telephone Network

Posted in Networking (340) by Guest on the May 27th, 2010

Public Switched Telephone Network PowerPoints

S_PSTN.PPT

PSTN.PPT 

PBX_Firewalls.ppt

http://bestitdocuments.com/Services.html 


Network Performance Management Analysis

Posted in Networking (340) by Guest on the May 27th, 2010

What Is The Value Of The Network?

A network has no value by itself

Networks allow users and applications to access information

To be of value, a network must provide  

Reliable connectivity

Acceptable performance

New networking infrastructures are growing

Each technology increases performance, reliability, or both

Today’s Performance Management Tools

Performance Management:  Designing, testing, monitoring, and stressing computer networks  

Simulation and Modeling Tools

Passive Monitors

Traffic Generators

Simulation and Modeling Tools

Mathematical representations of the network

Rely on detailed information about network devices, links, protocols, and data traffic

Allow what-if analysis without changing the network

Identify bottlenecks in the network’s design  

Help minimize tariffs

Have difficulties accurately predicting performance in dynamic, complex, multi-vendor networks  

Passive Monitors

Monitor traffic passing on a LAN or WAN link

Examples include LAN analyzers, RMON probes, and SNMP agents

Give the most accurate view of the traffic flowing on segments of a network

Have difficulties providing an end-to-end view of network performance  

Traffic Generators

Create network traffic to proactively evaluate performance and reliability (a small round-trip timing sketch follows this list)

Simple applications such as Ping and FTP

Specialized products, e.g., packet generators

Can create controlled, repeatable network traffic

Difficult to simulate real application flows

Hard to extend to a production network  
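A minimal sketch of the traffic-generator idea, not from the original post: time a small TCP round trip against an endpoint you control. The target host, port, and payload are assumptions, and this measures only one crude metric rather than real application flows:

```python
# Rough TCP round-trip timer between two endpoints; target and payload are assumptions.
import socket, time

TARGET = ("testhost.example", 7)   # assumption: an echo-style service you control
PAYLOAD = b"x" * 64
SAMPLES = 10

times = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection(TARGET, timeout=5) as s:
        s.sendall(PAYLOAD)
        s.recv(len(PAYLOAD))
    times.append((time.perf_counter() - start) * 1000.0)

print(f"round trips (ms): min={min(times):.1f} avg={sum(times)/len(times):.1f} max={max(times):.1f}")
```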

A Reality Check

What happens when an ATM cell is dropped in a TCP connection?

Cell generator’s view:

Lost one cell; the rest were fine

A real application’s view:

Lost one cell

The rest of the cells in the TCP frame are discarded

The sending side times out waiting for an acknowledgment

The lost frame is retransmitted

TCP goes into Slow Start

What’s Missing

Simulate real application flows over multiple protocols

Measure end-to-end performance

Run on existing software and hardware

Scale for very large tests

Many users  

High bandwidth

Work well in test labs or in production networks

Be easy to use

Qualify Hardware and Software

Evaluate performance of protocol stacks

Evaluate performance of hardware vendors

Use same network configuration and applications to compare vendor results  

Profile Application Demands

Install  endpoints in the production network

Performance of new applications can be tested before deploying

What will user response time be?

What effect will they have on other applications?

Does the network or application need to be changed?  

Verify Network Changes

Use established benchmarks to test network performance

Make network changes to devices, routing tables, topology

Validate performance of new devices or topology using the same benchmarks  

Testing Data Compression

Choose the type of data sent between endpoints

Standard Calgary Corpus support as well as user-defined data (a small measurement sketch follows this list)

Evaluate performance of various devices

Test performance for remote access applications
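As a minimal illustration of this kind of measurement (not from the original post), the sketch below compresses a few files with zlib and reports the ratios; the file names are assumptions, and a real evaluation would use a reference set such as the Calgary Corpus and the actual devices under test:

```python
# Report zlib compression ratios for a few files; the file list is an assumption.
import zlib

FILES = ["report.doc", "access.log", "photo.bmp"]   # hypothetical test data

for name in FILES:
    with open(name, "rb") as f:
        raw = f.read()
    packed = zlib.compress(raw, 9)
    ratio = len(raw) / len(packed) if packed else 0
    print(f"{name}: {len(raw)} -> {len(packed)} bytes (ratio {ratio:.2f}:1)")
```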

http://bestitdocuments.com/Services.html


What is a process implementation?

Posted in Business (600) by Guest on the May 26th, 2010

Very simple:

Where are you now? 

Where do we want to be?  

How do we get there?

http://bestitdocuments.com/Services.html

 


Disaster Recovery Requirements

Posted in Compliances (1300),Security (1500) by Guest on the May 26th, 2010

Set the institution’s definition of “disaster”

Driven by Business Impact

Priority of Mission Critical Applications

  Priority of Mission Critical Business and IT Services 

Define Requirements

Set Threshold for Recovery  

Questions to Consider:

What is the threshold on recovery time (RTO) and recovered data (RPO)?

What is the objective during disaster recovery period:

Minimum basic functions – i.e., online materials availability and course continuation?

Full Production Availability, including LDAP, Customizations/Building Blocks availability?

What is the plan for post-DR?

Business Continuity Service considerations:

Recovery Time Objective (RTO)

RTO is the time-measured objective for having the corporate Business Continuity Service operation up and running, counted from the point in time that Corporate is made aware of the failure of the client’s primary Corporate applications system.

Recovery Point Objective (RPO)

RPO is the objective of limiting the loss of the client’s database and file storage content by backing up the client’s information at least as often as the interval guaranteed under each service level.
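A minimal sketch, not from the original post, of how the two objectives translate into a concrete check: compare the age of the last good backup against the RPO, and the elapsed recovery time against the RTO. All values below are assumptions.

```python
# Compare elapsed times against RPO/RTO targets; every value here is an assumption.
from datetime import datetime, timedelta

RPO = timedelta(hours=4)      # maximum tolerable data loss
RTO = timedelta(hours=8)      # maximum tolerable time to restore service

last_good_backup = datetime(2010, 5, 26, 2, 0)   # hypothetical
failure_declared = datetime(2010, 5, 26, 9, 30)  # hypothetical
service_restored = datetime(2010, 5, 26, 15, 0)  # hypothetical

data_loss_window = failure_declared - last_good_backup
recovery_time = service_restored - failure_declared

print(f"data at risk: {data_loss_window} (RPO {RPO}) ->",
      "within objective" if data_loss_window <= RPO else "RPO exceeded")
print(f"recovery took: {recovery_time} (RTO {RTO}) ->",
      "within objective" if recovery_time <= RTO else "RTO exceeded")
```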

Customizations & Configuration

Dependent on Client’s requirement and RTO & RPO objectives

The Human Factor:

Rigorous & Regular Training

Redundancy in Skill Sets

Plan for loss of critical staff in a DR event.

Change Management Control Tools – Central Authentication System, Automated Scripts, Documentation, etc.

Readiness Tests – e.g. Preparation Readiness Testing

Perform routine testing to ensure technology is working as expected  

Documentation

Disaster Recovery Procedures should be well documented.

Plan for the unexpected. 

Loss of critical staff

No Physical Access to the facility

Loss of traditional internet access to the facility

Install a POTS line with serial connections to the infrastructure

http://bestitdocuments.com/Services.html


eGIS Technical Needs

Posted in Application (380) by Guest on the May 25th, 2010

High speed access into Data Center (Large Spatial Data Loads)

Administrative privileges over eGIS database schemas

Security Requirements for  tGIS (external to Collaboration site)

Database user  privilege and security requirements

Requirements for eGIS data in other systems

Data Change Management process and procedures

Secure FTP for data transfer

http://bestitdocuments.com/Services.html


High Level – High Availability

Posted in Security (1500) by Guest on the May 25th, 2010

HA offers Application Resiliency

Critical Applications can remain active even when the primary hardware they rely on goes down (a toy health-check sketch appears below)

Applications can remain active through maintenance cycles and backups
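A toy sketch, purely illustrative and not from the original post, of the health-check decision that sits behind this kind of resiliency: probe the primary node and direct traffic to a standby when the primary stops answering. The host names and port are assumptions; real HA suites automate this, along with state replication and failback.

```python
# Toy health-check/failover decision; the hosts and port are assumptions.
import socket

PRIMARY = "app-primary.example"
STANDBY = "app-standby.example"
PORT = 443

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """A node is considered up if it accepts a TCP connection within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

active = PRIMARY if is_up(PRIMARY, PORT) else STANDBY
print(f"directing traffic to {active}")
```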

  

HA offers the promise of minimal down time

Staff can remain working on HA equipment almost transparently

Customers can keep using services instead of receiving unavailable messages

Some disaster situations are eliminated completely

  

HA does require more administration

Configuration

Testing

Training

http://bestitdocuments.com/Services.html


What is Strategic Outsourcing?

Posted in Business (600) by Guest on the May 24th, 2010

Strategic Outsourcing is the utilization of world class skills, technology and resources to consult, develop, and deploy business processes and IT solutions under a multiyear contractual relationship.  

Strategic Outsourcing thereby affords customers:

Increased competitive advantage in their industry  

Improved cost-to-benefit value relationship in their business  

Focus on core competencies  

Reasons to Outsource  

Improved speed-to-market  

Relationship/partner in transformation to e-business  

Focus on core competencies  

Access to intellectual capital  

Improved competitive position  

Access to scarce skills and resources with no long-term investment  

Disciplined process  

Shared Risk

http://bestitdocuments.com/Services.html

 


What is a Typical RFP?

Posted in Business (600) by Guest on the May 24th, 2010

Very detailed, well-constructed document 

Something that affects all business units  

Often Involves eCommerce, cross-media production and lots more  

Typically requires a detailed vendor response, including an on-site presentation and evaluation

http://bestitdocuments.com/Services.html

 


Consideration – Disaster Recovery

Posted in Security (1500) by Guest on the May 23rd, 2010

What is it?

“Ability to recover from the loss of a complete site, whether due to a natural disaster or malicious intent.”

“A plan of action to recover from an unlikely event of a severe or catastrophic business disruption.”

It is NOT planning for Mean-Time-To-Recovery (MTTR) from daily operational risks.

MTTR Choices

On-Demand, Redundant Scaling Technologies

Use of attached clustered storage allows quick accommodation of client growth

2N Redundancy at Core Infrastructure Level

Burstable, Redundant Internet Connectivity

Load Balancing

Dual Core CPU, Caching, Hyper-threading

Clustering Technology

Recovery Oriented Choices:

Autonomic Capabilities – Datacenter / Network / Systems

Warm Standby (rack, stacked and powered up)

SnapMirror Technologies

Oracle DataGuard & RMAN Technologies

Cold Standby

“Platinum” Service Contracts from Service Providers and Vendors

N+1 Redundancy capabilities at client level

Levels of Data Backups

1st level backup:

Using snapshot technology, file systems & database backed up on Network Attached Storage systems

2nd level backup:

Using NetApp NearStore devices, 1st level backups are stored online, off-site for 30 days

3rd level backup:

Weekly backups are stored on tape, off-site for 30 days

http://bestitdocuments.com/Services.html

 


Information Technology

Posted in Business (600) by Guest on the May 22nd, 2010

Information Systems have become an essential element of our economic infrastructure.

As important as energy, transportation and financial systems.

Information systems may be the most critical because all other systems are increasingly dependent on computer networks.

Over a very short period of our history, computer systems and the Internet have become the critical element of our social and economic infrastructure.

Valuable to broad elements of the society:

Enhance communication

Efficiency

Instant access to the world  

Information technology can be used:

By anyone

To pursue any agenda.

Information technology can lead to:

Security threats

Crime

Loss of sensitive business information

Harm to corporate and brand-name reputation

Loss of privacy  

Misguided public policies can have a direct effect on the business community

The challenge is to find the line between the fair use of information and the abuse of the power that information technology can foster.

The Public’s Right to Know

Company Confidential Information may be required to be released to government agencies, and thus become public, due to legal requirements or agency policy.

Electronic Commerce

The business community needs to have an active interest in the development of policies that are being put in place to control the growth of electronic commerce.  

Public disclosure of information

Information collected by the government can be an extremely valuable source of data for the competitive intelligence community.

The Freedom of Information Act has been a powerful tool for those engaged in competitive intelligence.

Valuable information can be found at all levels of your organization

http://bestitdocuments.com/Services.html

 


What is a Blended Threat ?

Posted in Security (1500) by Guest on the May 22nd, 2010

Any threat that uses multiple means of propagation

AND requires an integrated response from more than one technology

Discovering Vulnerabilities

How vulnerabilities are discovered:

By accident or chance

Browsing through CVS entries, software development, bug databases, or change logs

Using source code scanning tools

ITS4, Flawfinder, or RATS (a toy example of this approach follows the list)

Utilizing vulnerability scanners

Manually analyzing software code or hardware
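A toy version of the source-code-scanning approach listed above; this is an assumption-laden sketch, nothing like a real scanner such as Flawfinder, that simply greps C sources for calls that are frequent sources of buffer overflows:

```python
# Toy source scanner: flag C calls that commonly lead to buffer overflows.
# The source directory and the list of risky calls are illustrative assumptions.
import pathlib, re

RISKY = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")
SOURCE_DIR = pathlib.Path("./src")            # hypothetical source tree

for path in SOURCE_DIR.rglob("*.c"):
    for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        if m := RISKY.search(line):
            print(f"{path}:{lineno}: possible risky call {m.group(1)}()")
```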

Exploiting Vulnerabilities

Why are the bad guys able to exploit vulnerabilities so quickly?

Most software is insecure and full of vulnerabilities

Due to poor software coding processes, lack of training, etc.

Vendors typically don’t patch vulnerabilities quickly enough

Customers/Consumers don’t apply patches quickly enough

Many vulnerabilities have multiple attack vectors

http://bestitdocuments.com/Services.html

 


Data Archive Strategy

Posted in Security (1500) by Guest on the May 21st, 2010

The best backup strategy starts with the Restore!  

Determine what data needs to be archived

Create a plan (selection logic for each is sketched after this list)

Base backup

Incremental backup

Differential backup  
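A minimal sketch, not from the original post, of how base, incremental, and differential selection differ: files are chosen by modification time relative to the last full backup or the last backup of any kind. The data path and reference times are assumptions.

```python
# Select files for backup by modification time; the path and reference times are assumptions.
import pathlib

DATA = pathlib.Path("/data")          # hypothetical data to protect
last_full = 1275000000.0              # epoch time of the last base (full) backup
last_any = 1275300000.0               # epoch time of the most recent backup of any kind

def files_since(cutoff: float):
    """Files modified after the given cutoff time."""
    return [p for p in DATA.rglob("*") if p.is_file() and p.stat().st_mtime > cutoff]

base = [p for p in DATA.rglob("*") if p.is_file()]   # base backup: everything
differential = files_since(last_full)                # changed since the last full backup
incremental = files_since(last_any)                  # changed since the last backup of any kind

print(len(base), "files in a base backup")
print(len(differential), "files in a differential backup")
print(len(incremental), "files in an incremental backup")
```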

Frequency and speed of data restore

Consider your network environment  

Operating systems (Windows, Unix, etc.)  

Firewalls (bandwidth, etc.)  

Routers, Switches  

Carefully consider the backup media  

NAS (Network Attached Storage) devices offer speed at a cost 

Tapes come in hundreds of types/speeds/storage capacities  

Stored off-site in a secure location

http://bestitdocuments.com/Services.html

 
