ANALYSIS OF COMPOSITE MATERIAL USING
FINITE ELEMENT METHOD
ABSTRACT:
The objective of this paper is the finite element analysis of a pressurized laminated composite shell with an elliptical cutout/inclusion under two pressure load mechanisms, and a comparison of the results for different materials.
MAGNETIC REFRIGERATION
a fruitful approach to reduce environmental pollution...
ABSTRACT:
The objective of this effort is to determine the feasibility of designing, fabricating and testing a sensor cooler which uses solid materials as the refrigerant. These materials demonstrate the unique property known as the magnetocaloric effect: they increase and decrease in temperature when magnetized and demagnetized. This effect has been observed for many years and was used for cooling near absolute zero. Recently, materials are being developed which have sufficient temperature and entropy change to make them useful for a wide range of temperature applications. The proposed effort includes magnetocaloric material selection, analyses, design and integration of components into a preliminary design. Benefits of this design are lower cost, longer life, lower weight and higher efficiency, because it requires only one moving part - the rotating disk on which the magnetocaloric material is mounted. The unit uses no gas compressor, no pumps, no working fluid, no valves, and no ozone-destroying chlorofluorocarbons/hydrochlorofluorocarbons (CFCs/HCFCs). Potential commercial applications include cooling of electronics, superconducting components used in telecommunications equipment (cell phone base stations), home and commercial refrigerators, heat pumps, air conditioning for homes, offices and automobiles, and virtually any place that refrigeration is needed.
INTRODUCTION:
Refrigeration:
Definition: Refrigeration is the process of reducing the temperature of a substance below that of its surroundings using a working medium called a refrigerant.
Initially, refrigeration was used to preserve foodstuffs by arresting bacterial action; the technology was later developed and extended to industrial applications. For example, cooled cutting oil lowers the workpiece temperature to prevent overheating during machining; quenching baths for heat-treating operations and pharmaceutical processing are other industrial applications.
Conventional Refrigeration vs Non-Conventional (Magnetic) Refrigeration:
In a conventional refrigeration system, a working medium is needed to carry heat from the refrigerated space to the surrounding atmosphere. This medium may be a solid, liquid or gas. Some of the refrigerants used initially were ammonia (NH3), carbon dioxide (CO2) and sulphur dioxide (SO2). Because of drawbacks in these early refrigerants, refrigerants such as F-11, F-12, F-22 and F-113, which are both economical and efficient, came into use.
The minimum temperature obtainable with these refrigerants is about 0.71 K, reached by boiling liquid helium under the lowest pressure practically obtainable. Temperatures below this range can be obtained only with non-conventional refrigeration systems.
Magnetic refrigeration is a method of refrigeration based on the MAGNETOCALORIC EFFECT, defined as the response of a solid to an applied magnetic field, apparent as a change in its temperature.
Instead of ozone-depleting refrigerants and energy-consuming compressors found in conventional vapor-cycle refrigerators, this new style of refrigerator uses iron ammonium alum that heats up when exposed to a magnetic field, then cools down when the magnetic field is removed.
NON-CONVENTIONAL REFRIGERATION:
TYPES INCLUDE:
1. Thermoelectric Refrigeration
2. Acoustic Refrigeration
3. Magnetic Refrigeration
MAGNETIC REFRIGERATION:
PRINCIPLE:
Magnetic refrigerants heat up when they are subjected to a magnetic field because the second law of thermodynamics states that the entropy - or disorder - of a closed system must increase with time. This is because the electron spins in the atoms of the material are aligned by the magnetic field, which reduces entropy. To compensate for this, the motion of the atoms becomes more random, and the material heats up. In a magnetic refrigerator, this heat would be carried away by water or by air. When the magnetic field is turned off, the electron spins become random again and the temperature of the material falls below that of its surroundings. This allows it to absorb more unwanted heat, and the cycle begins again.
Refrigeration can be achieved by producing very low temperatures through the process of adiabatic demagnetization. The paramagnetic salt is suspended by a thread in a tube containing low-pressure gaseous helium, which provides thermal communication with the surrounding bath of pumped liquid helium. In operation, the helium bath is cooled by pumping to the lowest practical pressure, usually reaching a temperature in the neighborhood of 1 K. The temperature of the paramagnetic salt approaches that of the bath by conduction through the exchange gas.
Next the magnetic field is turned on, causing heating of the salt and a decrease in the entropy of the magnetic ions by virtue of their partial alignment in the direction of the applied field. The heat produced is conducted to the surrounding bath of liquid helium so that the temperature again approaches 1 K. If the magnetic field is increased slowly, the heat can flow out as it is generated, the magnetization being almost isothermal. Next, the exchange gas surrounding the sample is removed by pumping, and, with the salt now thermally isolated, the magnetic field is turned off. The temperature of the sample decreases markedly as a consequence of the adiabatic demagnetization, which allows the magnetic ions to regain some of their entropy at the expense of the lattice energy of the salt.
The iron ammonium alum salt, originally in zero field (H = 0, S = S1), is magnetized isothermally at the temperature T1 by increasing the magnetic field to H = H1. This magnetization, by orienting the magnetic ions of the salt and thus decreasing their disorder, reduces the entropy from S1 to S2. The salt is then thermally isolated from its surroundings, so that when the magnetic field is reduced to zero the process follows a horizontal isentropic line and the temperature falls well below 1 K. The great decrease in temperature and the close approach to absolute zero are consequences of the peculiar shape of the entropy-temperature relation.
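The entropy bookkeeping above can be made quantitative. In standard treatments of the magnetocaloric effect (derived from the Maxwell relation between entropy and magnetization; the placement of the mu_0 factor depends on the unit convention, so take this as the usual textbook form rather than this paper's own derivation), the isothermal entropy change and adiabatic temperature change for a field change from H1 to H2 are:

\[
\Delta S_M(T, \Delta H) = \mu_0 \int_{H_1}^{H_2} \left( \frac{\partial M}{\partial T} \right)_H dH,
\qquad
\Delta T_{ad}(T, \Delta H) = -\,\mu_0 \int_{H_1}^{H_2} \frac{T}{C(T,H)} \left( \frac{\partial M}{\partial T} \right)_H dH,
\]

where M is the magnetization and C(T, H) is the heat capacity at constant field. For a paramagnet (∂M/∂T)_H < 0, so magnetizing the salt lowers its magnetic entropy and releases heat, while demagnetizing it absorbs heat - exactly the cycle traced above.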
WORKING
The process flow diagram for the magnetic refrigeration system is shown in the figure below. A mixture of water and ethanol serves as the heat-transfer fluid for the system. The fluid first passes through the hot heat exchanger, which rejects heat to the atmosphere via air. The fluid then passes through the copper plates attached to the non-magnetized, cooler magnetocaloric beds and loses heat.
CYCLE FOR MAGNETIC REFRIGERATION IN POSITION 1:
A fan blows air past this cold fluid into the freezer to keep the freezer temperature at approximately 0°F. The heat-transfer fluid is then heated back up to about 80°F as it passes through the copper plates attached to the magnetized, warmer magnetocaloric beds, and it continues to cycle around the loop. Meanwhile, the magnetocaloric beds move up and down, into and out of the magnetic field.
CYCLE FOR MAGNETIC REFRIGERATION IN POSITION 2:
The figure below shows how the cold air from the freezer is blown into the refrigerator by the freezer fan. The temperature of the refrigerator section is kept around 39°F.
The typical household refrigerator has an internal volume of about 21 cu ft, of which the freezer represents approximately 30%. Freezers are designed to maintain a temperature of 0°F, and refrigerators a temperature of 39°F. The refrigerator is insulated with polyurethane foam, one of the most common forms of insulation available. The refrigerator compartment is kept cool by forcing cold air from the freezer into it with a small fan. The control system for maintaining the desired internal temperatures consists of two thermostats with on/off switches.
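As a rough illustration of that two-thermostat on/off control (a hypothetical sketch: the setpoints match the paper's 0°F and 39°F targets, but the dead-band widths are illustrative assumptions):

```python
# Sketch of on/off (hysteresis) control for the two thermostats
# described above. Setpoints follow the paper (0 F freezer, 39 F
# refrigerator); the dead-band widths are illustrative assumptions.

def thermostat(temp_f, setpoint_f, dead_band_f, currently_on):
    """Return True when the cooling device should run."""
    if temp_f > setpoint_f + dead_band_f:
        return True             # too warm: switch cooling on
    if temp_f < setpoint_f - dead_band_f:
        return False            # cold enough: switch cooling off
    return currently_on         # inside the dead-band: hold state

# Freezer cooled by the magnetocaloric loop; refrigerator cooled by
# the small fan that pushes freezer air into the cabinet.
run_cooling_loop = thermostat(temp_f=3.0, setpoint_f=0.0,
                              dead_band_f=2.0, currently_on=False)
run_fan = thermostat(temp_f=41.0, setpoint_f=39.0,
                     dead_band_f=1.5, currently_on=False)
print(run_cooling_loop, run_fan)    # True True
```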
FRAUD DETECTION IN CREDIT CARD TRANSACTION USING ENHANCED DC-1 DATA MINING ALGORITHM AND ITS PREVENTION
ABSTRACT
Frauds have plagued telecommunication industries, financial institutions and other organizations for a long time. The type of fraud addressed in this paper is credit card transaction fraud, which costs businesses millions of dollars per year. As a result, fraud detection has become an important and urgent task for these businesses. At present a number of methods are used to detect fraud, from statistical approaches (e.g. data mining) to hardware approaches (e.g. firewalls, smart cards).
Currently, data mining is a popular way to combat fraud because of its effectiveness. Data mining is "a well-defined procedure that takes data as input and produces output in the forms of models or patterns." In other words, the task of data mining is to analyze a massive amount of data and to extract some usable information that we can interpret for future uses.
OUR IMPLEMENTATION
In this paper we enhance the Detector Constructor technique known as DC-1 for detecting credit card fraud. We also discuss some data mining techniques for fraud detection, outline the steps for online credit card fraud detection, and propose a prevention technique.
TABLE OF CONTENTS
1. INTRODUCTION
2. TYPES OF CREDIT CARD FRAUDS
2.1. INDUSTRY FRAUDS
2.1.1. STOLEN CARDS
2.1.2. APPLICATION FRAUD
2.1.3. CARDHOLDER-NOT-PRESENT FRAUD
2.1.4. COUNTERFEIT CARDS
2.2. ONLINE FRAUD
2.2.1. ORGANIZED FRAUD
2.2.2. OPPORTUNISTIC FRAUD
2.2.3. CARDHOLDER FRAUD
3. FRAUD DETECTION USING DATA MINING TECHNIQUES
4. OUR IMPLEMENTATION-ENHANCED DC-1 ALGORITHM
4.1. DC-1 FRAMEWORK
4.2. EXPLANATION
4.3. THE ENHANCED DC-1 ALGORITHM
4.3.1. K-MEANS PROCEDURE
4.3.2. THE FINAL ENHANCED ALGORITHM
4.4. ADVANTAGES OF ENHANCED DC-1 ALGORITHM
5. OUR SUGGESTIONS FOR CONTROLLING ONLINE FRAUD
6. OUR SUGGESTION FOR FRAUD PREVENTION
7. CONCLUSION
1. INTRODUCTION
The Concise Oxford Dictionary defines fraud as ‘criminal deception; the use of false representations to gain an unjust advantage'. Fraud is as old as humanity itself, and can take an unlimited variety of different forms. We begin by distinguishing between fraud prevention and fraud detection. Fraud prevention describes measures to stop fraud occurring in the first place. In contrast, fraud detection involves identifying fraud as quickly as possible once it has been perpetrated. Fraud detection comes into play once fraud prevention has failed. Fraud detection is a continuously evolving discipline. Whenever it becomes known that one detection method is in place, criminals will adapt their strategies and try others.
In this paper we detect credit card fraud using data mining techniques. Data mining is the process of automated extraction of predictive information from large databases. It predicts future trends and finds behavior that experts may miss because it lies beyond their expectations. Data mining is part of a larger process called knowledge discovery: specifically, the step in which advanced statistical analysis and modeling techniques are applied to the data to find useful patterns and relationships. This paper presents an overview of the traditional predictive modeling technique for fraud detection and an enhancement of the DC-1 data mining algorithm for the same purpose.
2. TYPES OF CREDIT CARD FRAUDS
2.1 INDUSTRY FRAUDS
Credit card fraud may be perpetrated in various ways, including simple theft, application fraud and counterfeit cards. In all of these, the fraudster uses a physical card, but physical possession is not essential in order to perpetrate credit card fraud: one of the major areas is ‘cardholder-not-present’ fraud, where only the card details are given (over the phone).
2.1.1 STOLEN CARD
Use of a stolen card is perhaps the most straightforward type of credit card fraud.
In this case, the fraudster typically spends as much as possible in as short a time as possible before the theft is detected and the card is stopped, so detecting the theft early can prevent large losses.
2.1.2 APPLICATION FRAUD
Application fraud arises when individuals obtain new credit cards from issuing companies using false personal information. Traditional credit scorecards are used to detect customers who are likely to default, and the reasons for this may include fraud. Such scorecards are based on the details given on the application forms, and perhaps also on other details, such as bureau information. Statistical models that monitor behaviour over time can be used to detect cards that have been obtained from a fraudulent application (e.g. a first-time cardholder who rushes out and rapidly makes many purchases should arouse suspicion). With application fraud, however, urgency is not so important to the fraudster, and it might not be until statements are sent out or repayment dates begin to pass that fraud is suspected.
2.1.3 CARDHOLDER-NOT-PRESENT FRAUD
Cardholder-not-present fraud occurs when the transaction is made remotely, so that only the card’s details are needed, and a manual signature and card imprint are not required at the time of purchase. Such transactions include telephone sales and online transactions, and this type of fraud accounts for a high proportion of losses. To undertake such fraud it is necessary to obtain the details of the card without the cardholder’s knowledge. This is done in various ways, including ‘skimming’, where employees illegally copy the magnetic stripe on a credit card by swiping it through a small handheld card reader, ‘shoulder surfers’ who enter card details into a mobile phone while standing behind a purchaser in a queue, and people posing as credit card company employees taking details of credit card transactions from companies over the phone.
2.1.4 COUNTERFEIT CARDS
Counterfeit cards, currently the largest source of credit card fraud, can also be created using card details obtained over the phone. Transactions made by fraudsters using counterfeit cards or making cardholder-not-present purchases can be detected by methods that look for changes in transaction patterns, as well as by checking for particular patterns known to be indicative of counterfeiting.
2.2 ONLINE FRAUD
Online credit card fraud against merchants can be broken out into three major categories:
Organized Fraud
Opportunistic Fraud
Cardholder Fraud
2.2.1 ORGANIZED FRAUD
It is a form of organized crime. The criminals use identity theft or some other means to apply for valid credit cards under someone else's name. Once issued, they set up a drop location where they have goods delivered to (usually a vacant house or apartment) and they spend the cards up to their limit. When the bill comes 30 - 45 days later, there's nobody there to pay it and the criminals move on to another credit card. A minor variation on this theme is the hacker/cracker using software to generate seemingly valid credit card numbers. Both types of criminals are normally looking for items that can be easily converted into cash. These are probably the hardest criminals to catch because they know all the ins and outs of the system and are constantly altering their techniques as soon as an anti-fraud measure begins to show any level of success.
2.2.2 OPPORTUNISTIC FRAUD
It is, quite simply, fraud that is committed because the opportunity happens to present itself. Perhaps a waiter, a little short on cash, copies down the credit card info from a customer and then goes online and buys his wife a nice birthday present. There are a million variations on this, but essentially the person committing the fraud doesn't do it for a living. They are amateurs who happened to take advantage of an opportunity.
2.2.3 CARDHOLDER FRAUD
It is when the legitimate cardholder is the person committing fraud. Sometimes they claim they never received the merchandise. Sometimes they claim they never ordered the merchandise. Whatever the excuse, the cardholder knows how card-not-present transactions are treated by the credit card companies and aims to take advantage of the system. Even if the merchant calls the customer and confirms that they placed the order, when the bill comes they can claim they never heard of the company and the credit card company will stick the merchant with the liability. A minor variation on this type of fraud is the spouse or children who use the card and then deny the charges. Usually the actual cardholder is completely ignorant of the unauthorized use, but the result is still the same for the merchant.
3. FRAUD DETECTION USING DATA MINING TECHNIQUES
Data mining techniques go well beyond the limitations of simple exception reporting by identifying suspicious cases based on patterns in the data that are suggestive of fraud. Patterns in data that can be indicative of fraud can have one or more of the following characteristics:
Unusual data values which deviate from the norm in some way
Unusual relationships among data values and/or records
Changes in the behavior of those involved in the transactions.
Characteristic          | Data Mining Technique
Unusual data            | Outlier analysis; frequency of occurrence; cluster analysis; algorithms
Unusual relationships   | Outlier analysis; frequency of occurrence; cluster analysis; link analysis
Changes in behavior     | Outlier analysis; frequency of occurrence
These are some of the data mining techniques for detecting fraudulent transactions that exhibit the above characteristics.
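As an illustration of the first characteristic, a toy outlier-analysis pass over an account's transaction amounts might look like the following sketch (a hypothetical z-score filter, not a technique prescribed by this paper; with small samples an extreme value also inflates the standard deviation, so a modest threshold is used):

```python
import statistics

def flag_outliers(amounts, z_threshold=2.0):
    """Flag amounts that deviate strongly from the account's own
    history (simple z-score outlier analysis)."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []
    return [a for a in amounts if abs(a - mean) / stdev > z_threshold]

history = [42.0, 18.5, 60.0, 25.0, 31.0, 22.0, 2500.0]
print(flag_outliers(history))   # [2500.0] - the odd purchase stands out
```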
4. OUR IMPLEMENTATION-ENHANCED DC-1 ALGORITHM
4.1 DC-1 FRAMEWORK
Our approach to building a fraud detection system is to classify individual transactions as fraudulent or legitimate. In sum, the problem comprises three questions, each corresponding to a component in the framework. The questions are:
1. Which transactions are important? Which features, or combinations of features, are useful for distinguishing legitimate behavior from fraudulent behavior?
2. How should profiles be created? Given an important feature, how should we characterize/profile the behavior of a credit card holder with respect to that feature, in order to notice important changes?
3. When should alarms be issued? Given the results of profiling behavior on multiple criteria, how should they be combined to determine effectively when fraud has occurred?
[Figure: the Detector Constructor (DC-1) framework - transactions are mined into rules; rules plus monitor templates yield profiling monitors.]
4.2 EXPLANATION
The Detector Constructor framework (DC-1) starts by analyzing available transaction records, including fraudulent transactions.
(1) CLASSIFICATION RULE LEARNING
First, based on the given history of an account, the account's transactions are analyzed and labeled as fraudulent or legitimate (non-fraudulent). A local set of rules for the account is then searched for. For example, for one specific account the following classification rule might be devised:
(no. of transactions >= restricted no. of transactions) AND (amount of transactions > threshold value) => fraudulent transaction
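Read as code, such an account-specific rule is simply a predicate over an account's transaction summary. A minimal sketch (the field names and threshold values are illustrative assumptions, not values from this paper):

```python
# Hypothetical rendering of the account-specific classification rule
# above; the parameter values are illustrative only.

def is_fraudulent(num_transactions, total_amount,
                  restricted_transactions=20, amount_threshold=5000.0):
    """Rule: (no. of transactions >= restricted no. of transactions)
    AND (amount of transactions > threshold value) -> fraud."""
    return (num_transactions >= restricted_transactions
            and total_amount > amount_threshold)

print(is_fraudulent(num_transactions=25, total_amount=7200.0))  # True
print(is_fraudulent(num_transactions=5,  total_amount=7200.0))  # False
```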
However, since the rules generated in this way are specific to a single account, a set of a priori rules that can serve as general fraud indicators is required. To generate rules that apply to as many accounts as possible, the algorithm is controlled by two parameters, Trules and Taccts: Trules is a threshold on the number of rules required to cover each account, and Taccts is the number of accounts in which a rule must have been found for it to be selected at all. The list of rules generated from each account is reviewed, and finally the rules that appear most frequently across the entire account set are chosen.
(2) CONSTRUCTION OF PROFILING MONITORS
After rules are selected, a set of monitors is built. The purpose of profiling monitors is to investigate the sensitivities of accounts to the general rules. The construction of profiling monitors consists of two stages, a profiling stage and a usage stage. In the profiling stage, a general rule is applied to a portion of an account's legitimate usage to evaluate the account's normal activity; in other words, the legitimate activities of an account are summarized into profiling monitors through the use of templates. The statistics of the account's normal activity are saved with that account. Later, in the usage stage, the monitor is applied to the account's entire activity over a period (e.g. an account-month). The resulting statistics can be used to examine how abnormal the account's usage is in that month.
During this process, the profiling monitors are built by the monitor constructor from a set of templates. These templates examine the conditions of the rules, and from each rule-template pair a profiling monitor is derived. Templates embody various statistical tests, such as a threshold monitor and a standard deviation monitor. The threshold monitor makes a binary categorization according to whether the user's behavior on a given day exceeds a threshold established during the profiling stage. The standard deviation monitor outputs a value reflecting how far the user's behavior in the monitored month deviates from the behavior profiled for the rule's condition.
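An illustrative reading of these two templates in code (a sketch under the assumption that "usage" is a per-day numeric quantity; DC-1's actual template definitions are not reproduced in this paper):

```python
import statistics

def make_threshold_monitor(profiling_days, quantile=0.95):
    """Threshold monitor: fix a threshold from an account's legitimate
    daily usage; later output 1 if a day's usage exceeds it."""
    threshold = sorted(profiling_days)[int(quantile * (len(profiling_days) - 1))]
    return lambda day_usage: 1 if day_usage > threshold else 0

def make_stddev_monitor(profiling_days):
    """Standard-deviation monitor: later output how many standard
    deviations a day's usage lies above the profiled mean."""
    mean = statistics.mean(profiling_days)
    sd = statistics.stdev(profiling_days) or 1.0   # guard against sd == 0
    return lambda day_usage: max(0.0, (day_usage - mean) / sd)

profile = [3, 5, 4, 6, 2, 5, 4, 7, 3, 5]    # legitimate charges per day
tm = make_threshold_monitor(profile)
sm = make_stddev_monitor(profile)
print(tm(12), round(sm(12), 1))             # 1 5.0 - clearly abnormal day
```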
(3) COMBINATION OF EVIDENCE FROM THE MONITORS
To improve the confidence of detection, the monitors are combined using evidence resulting from applying them to sample data. For example, the generated monitors are applied to a sample account for a month, and their outputs, indicating whether fraudulent activity is detected, are expressed as a result vector for that month. The evidence about the account for that month, i.e. whether the account-month truly contains fraud, is introduced together with the outputs. The outputs are then weighted, and the combination of evidence is trained with a threshold on the sum of weights; more confidence can thus be placed on monitors with larger weights, reducing false alarms. Even so, redundant and ineffective rules may remain. To reduce the number of monitors, DC-1 uses a sequential forward selection process. Finally, fraud detectors are selected from the monitors combined with evidence.
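A minimal sketch of this evidence-combining step (the weights and threshold would be trained on labelled account-months; the values below are illustrative assumptions):

```python
# Minimal sketch of combining monitor outputs with trained weights.
# Weights and threshold are illustrative assumptions, not trained values.

def combine_evidence(monitor_outputs, weights, threshold):
    """Weighted sum of monitor outputs compared against a trained
    threshold; returns True when fraud should be signalled."""
    score = sum(w * o for w, o in zip(weights, monitor_outputs))
    return score >= threshold

outputs = [1, 0, 1, 1]            # result vector for one account-month
weights = [0.9, 0.2, 0.4, 0.7]    # larger weight = more trusted monitor
print(combine_evidence(outputs, weights, threshold=1.5))  # True (2.0 >= 1.5)
```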
4.3 THE ENHANCED DC-1 ALGORITHM
We have enhanced the DC-1 algorithm by first clustering the data sets using the k-means algorithm and then applying the DC-1 technique. The rule sets Ra generated within the DC-1 algorithm are also clustered using k-means.
Clustering is a popular approach to implementing the partitioning operation. Clustering methods partition a set of objects into clusters such that objects in the same cluster are more similar to each other than objects in different clusters according to some defined criteria. The k-means algorithm is well known for its efficiency in clustering large data sets.
4.3.1 K-MEANS PROCEDURE
Given a set X of n numeric objects and an integer k (<= n), the k-means algorithm searches for a partition of X into k clusters that minimizes the within-groups sum of squared errors. This process is often formulated as the following mathematical programming problem P:

\[
\text{Minimize } P(W, Q) = \sum_{l=1}^{k} \sum_{i=1}^{n} w_{i,l}\, d(X_i, Q_l)
\]
\[
\text{subject to } \sum_{l=1}^{k} w_{i,l} = 1, \quad 1 \le i \le n; \qquad w_{i,l} \in \{0, 1\}, \quad 1 \le i \le n, \ 1 \le l \le k,
\]

where W is an n × k partition matrix, Q = {Q1, Q2, ..., Qk} is a set of cluster centers in the same domain as the objects, and d(·,·) is the squared Euclidean distance between two objects.
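A compact sketch of the standard k-means iteration on one-dimensional data (random initial centers; purely illustrative, not the paper's implementation):

```python
import random

def kmeans(points, k, iterations=100):
    """Plain k-means on 1-D numeric data: alternately assign each
    point to its nearest center and recompute the centers as means."""
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x in points:
            nearest = min(range(k), key=lambda l: (x - centers[l]) ** 2)
            clusters[nearest].append(x)
        new_centers = [sum(c) / len(c) if c else centers[l]
                       for l, c in enumerate(clusters)]
        if new_centers == centers:
            break                      # converged to a local optimum
        centers = new_centers
    return centers, clusters

amounts = [12.0, 15.0, 14.0, 210.0, 220.0, 205.0, 13.0]
centers, clusters = kmeans(amounts, k=2)
print(sorted(centers))                 # roughly [13.5, 211.7]
```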
4.3.2 THE FINAL ENHANCED ALGORITHM
Given:
Accts: the set of all accounts obtained after clustering with k-means (i.e. the set Q).
Rules: the set of all fraud rules generated from Accts.
Trules: (parameter) the number of rules required to cover each account.
Taccts: (parameter) the number of accounts in which a rule must have been found.
Output:
S: the set of selected rules.
/* Initialization */
S := {};
for (a ∈ Accts) do Cover[a] := 0;
for (r ∈ Rules) do
    Occur[r] := 0;        /* number of accounts in which r occurs */
    AcctsGen[r] := {};    /* set of accounts generating r */
end for

/* Set up Occur and AcctsGen */
for (a ∈ Accts) do
    Ra := set of rules generated from a;
    call the k-means procedure to cluster Ra;   /* rules are clustered here using k-means */
    for (r ∈ Ra) do
        Occur[r] := Occur[r] + 1;
        add a to AcctsGen[r];
    end for
end for

/* Cover Accts with Rules */
for (a ∈ Accts) do
    Ra := list of rules generated from a;
    sort Ra by Occur;
    while (Cover[a] < Trules) do
        r := highest-occurrence rule in Ra;
        remove r from Ra;
        if (r ∉ S and Occur[r] >= Taccts) then
            add r to S;
            for (a2 ∈ AcctsGen[r]) do
                Cover[a2] := Cover[a2] + 1;
            end for
        end if
    end while
end for
4.4 ADVANTAGES OF ENHANCED DC-1 ALGORITHM
It is efficient in processing large data sets.
It often terminates at a local optimum.
It works only on numeric values.
The clusters produced have convex shapes (roughly spherical groups of points).
It therefore becomes easier to find the maximum cover of the generated rules for distinguishing legitimate from fraudulent transactions.
5. OUR SUGGESTIONS FOR CONTROLLING ONLINE FRAUD
1. Do Mod10 algorithm testing. Mod10 (the Luhn check) is an algorithm that will tell you whether the card number being presented could be a valid card number. It doesn't mean the number was ever issued, or that the card number belongs to an active account, but it will tell you whether the digits the customer typed in could be in the range of valid credit card numbers issued by the major credit card companies. This test should be the first applied to any credit card number you process; if the card fails Mod10, it will fail all other attempts to authenticate and process a charge against it. (A short implementation sketch follows this list.)
2. Obtain an authorization and AVS check on every transaction. When a merchant processes a credit card transaction, they must normally receive an authorization for the amount of the order. This usually guarantees that the card number is valid and that the person has available credit for the amount being requested. The credit card companies also make available AVS (Address Verification Service), which you can use to further verify the validity of the card. AVS matches the numeric portions of the billing address and zip code provided by the customer against those held on file at the issuing bank. While there are numerous reasons why a card may fail AVS (recent change of address, AVS computers down, etc.), an AVS failure should be a red flag that needs further investigation.
3. Be extremely wary of orders where the shipping and billing addresses are not the same. Obviously, if you are in a business that sells items traditionally given as gifts (flowers would be an example) this may be difficult, but if the majority of your customers bill to the same address they ship to, be cautious of orders that are being shipped to a different address.
4. All newly issued credit and debit cards carry a 3-digit non-embossed number (the card verification value, CVV2/CVC2) on the back of the card. This number is not included in the data contained on the magnetic stripe of the card and is not printed on credit card statements or anywhere else, so asking for it on every order helps confirm that the customer physically has the card in hand.
5. Pay extra attention to orders that are for amounts greater than the norm or consist mostly of one type of item. Criminals trying to commit fraud will often place large orders for specific items that they know they can resell easily. For instance, if you sell DVDs and you receive an order for 25 copies of the same title, you should investigate further. Customers who place multiple small orders should draw your attention as well. Some criminals are aware that cautious merchants scrutinize large transactions, so the criminal simply places many smaller orders rather than one large one.
6. Be suspicious of orders that are placed for rush or expedited delivery. Since criminals aren't paying the shipping fees they normally don't care about the extra cost and they want the order shipped as quickly as possible. The longer the order sits around before shipping the greater the chance the fraud will be uncovered.
7. Any order consisting mostly or entirely of high-ticket items should receive extra scrutiny. High-ticket items usually have a high resale value, so they tend to be on the shopping list of many criminals.
8. Be alert to orders that originate from email addresses issued by free hosting providers like yahoo.com, hotmail.com, etc. Many sites simply will not accept orders from email addresses originating at free hosting providers.
9. Keep an eye out for orders from multiple accounts/credit-card numbers being shipped to the same delivery address. This may indicate a drop box or drop location where criminals are having orders delivered.
10. Orders being shipped to an international address should earn a closer inspection. Pay particular attention if the card or the shipping address is in an area prone to credit card fraud.
11. Watch for multiple orders being placed over a short period of time. Many criminals will attempt to run up a card before the owner finds out or, in the case of a stolen identity, before the first bill arrives. (A velocity-check sketch follows this list.)
12. Pick up the phone. If you have any suspicions about an order call the contact phone number given by the customer and attempt to confirm the details of the order. If you still don't feel comfortable, call the issuing bank and ask to confirm the account details.
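Returning to item 1 above: Mod10 is the Luhn checksum and can be coded in a few lines. A minimal sketch of the standard algorithm (the sample numbers are well-known test values, not real accounts):

```python
def luhn_valid(card_number: str) -> bool:
    """Mod10 (Luhn) check: double every second digit from the right,
    subtract 9 from doubles above 9, and require sum % 10 == 0."""
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4539 1488 0343 6467"))   # True  (passes Mod10)
print(luhn_valid("4539 1488 0343 6468"))   # False (fails Mod10)
```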
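And for item 11, a simple velocity check might look like this (a hypothetical sketch; the 24-hour window and order limit are illustrative assumptions):

```python
from collections import defaultdict

def velocity_alerts(orders, window_hours=24, max_orders=3):
    """Flag cards with more than max_orders orders inside any
    window_hours span; orders = list of (card, timestamp_in_hours)."""
    by_card = defaultdict(list)
    for card, t in orders:
        by_card[card].append(t)
    flagged = set()
    for card, times in by_card.items():
        times.sort()
        for i in range(len(times)):
            j = i
            # count orders within the window starting at times[i]
            while j < len(times) and times[j] - times[i] <= window_hours:
                j += 1
            if j - i > max_orders:
                flagged.add(card)
                break
    return flagged

orders = [("A", 1), ("A", 2), ("A", 5), ("A", 20), ("B", 1), ("B", 50)]
print(velocity_alerts(orders))   # {'A'}: four orders within 24 hours
```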
6. OUR SUGGESTION FOR FRAUD PREVENTION
Once again we stress that fraud prevention describes measures to stop fraud occurring in the first place. This can be done with biometric techniques such as:
Fingerprints
Iris recognition
Facial recognition
Of these, we suggest that iris recognition can be used most efficiently in credit card fraud prevention, because the iris code (a binary code) can easily be stored on the credit card and checked by card-sensing machines fitted with a camera. When a person stands before the card-sensing machine, his or her iris is captured through the camera and converted into a binary code, which must match the original code stored on the card. If it does not, no further transactions are allowed, and hence there is no chance of fraud occurring in the first place.
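Comparing a captured iris code with the one stored on the card reduces to a fractional Hamming distance test. A minimal sketch (the 16-bit codes and the 0.32 acceptance threshold are illustrative assumptions; real iris codes run to thousands of bits):

```python
def hamming_fraction(code_a: str, code_b: str) -> float:
    """Fraction of bit positions at which two equal-length binary
    iris codes disagree."""
    assert len(code_a) == len(code_b)
    diffs = sum(a != b for a, b in zip(code_a, code_b))
    return diffs / len(code_a)

def iris_match(stored_code: str, captured_code: str, threshold=0.32) -> bool:
    """Allow the transaction only if the captured iris code is close
    enough to the code stored on the card."""
    return hamming_fraction(stored_code, captured_code) <= threshold

stored   = "1011001110001011"
captured = "1011001010001011"        # one differing bit out of 16
print(iris_match(stored, captured))  # True: distance 0.0625 <= 0.32
```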
7. CONCLUSION
Fraud is a deliberate deception to obtain assets or resources. In the digital world, where speed and anonymity reign, this deception is costly and pervasive. Several criteria can be used when fraud is to be detected. Our algorithm and our suggested steps for online fraud detection satisfy the following criteria:
For cost-management, fraud-screening mechanisms should be internal to the processing system, not outsourced to a third party.
Fraud programs must be independently accessible and adaptable to the changing needs of the merchant.
For effective fraud screening, the process should be multi-tiered such that there are multiple levels of approval required prior to dispatch for final authorization.
Real-time transaction reports should be easily and independently accessible to the merchant.
Systems that refer to databases of past and present fraudulent cards and customer information provide tremendous value to merchants and should be part of the program.
The responsibility and risk of fraud is multi-faceted. Merchants, financial institutions and e-payment processors must be vigilant about fraud prevention, and must work in tandem to ensure the continued success and growth of this new commerce frontier.
REFERENCES
1. Data Mining and Knowledge Discovery in Databases, http://www.cs.sfu.ca/research/groups/DB/sections/publication/kdd/kdd.html
2. Burge, P. and Shawe-Taylor, J. (1997). Detecting cellular fraud using adaptive prototypes. AAAI Workshop: AI Approaches to Fraud Detection and Risk Management, 9-13.
3. Ralambondrainy, H. (1995). A conceptual version of the k-means algorithm. Pattern Recognition Letters, 16:1147-1157.
ABSTRACT
Frauds have plagued telecommunication industries, financial institutions and other organizations for a long time. The type of fraud addressed in this paper is credit card transaction fraud. This fraud cost the businesses millions of dollars per year. As a result, fraud detection has become an important and urgent task for these businesses. At present a number of methods have been implemented to detect frauds, from both statistical approaches (e.g. data mining) and hardware approaches (e.g. firewalls, smart cards).
Currently, data mining is a popular way to combat frauds because of its effectiveness. Data mining is “a well-defined procedure that takes data as input and produces output in the forms of models or patterns.” In other words, the task of data mining is to analyze a massive amount of data and to extract some usable information that we can interpret for future uses.
OUR IMPLEMENTATION
In this paper we have enhanced the First Detector Constructor systems technique called DC-1 for detecting credit card frauds. We have also discussed some data mining techniques for fraud detection. Our paper outlines the steps for online credit card fraud detection and proposed a prevention technique.
TABLE OF CONTENTS
1.INTRODUCTION
2.TYPES OF CREDIT CARD FRAUDS
2.1.INDUSTRY FRAUDS
2.1.1.STOLEN CARDS
2.1.2.APPLICATION FRAUD
2.1.3.CARDHOLDER-NOT-PRESENT FRAUD
2.1.4.COUNTERFIET CARDS
2.2.ONLINE FRAUD
2.2.1.ORGANISED FRAUD
2.2.2.OPPORTUNISTIC FRAUD
2.2.3.CARDHOLDER FRAUD
3.FRAUD DETECTION USING DATA MINING TECHNIQUES
4.OUR IMPLEMENTATION-ENHANCED DC-1 ALGORITHM
4.1.DC-1 FRAMEWORK
4.2.EXPLANATION
4.3.THE ENHANCED DC-1 ALGORITHM
4.3.1.K-MEANS PROCEDURE
4.3.2.THE FINAL ENHANCED ALGORITHM
4.4.ADVANTAGES OF ENHANCED DC-1 ALGORITHM
5.OUR SUGGESTIONS FOR CONTROLLING ONLINE FRAUD
6.OUR SUGGESTION FOR FRAUD PREVENTION
7.CONCLUSION
1. INTRODUCTION
The Concise Oxford Dictionary defines fraud as ‘criminal deception; the use of false representations to gain an unjust advantage'. Fraud is as old as humanity itself, and can take an unlimited variety of different forms. We begin by distinguishing between fraud prevention and fraud detection. Fraud prevention describes measures to stop fraud occurring in the first place. In contrast, fraud detection involves identifying fraud as quickly as possible once it has been perpetrated. Fraud detection comes into play once fraud prevention has failed. Fraud detection is a continuously evolving discipline. Whenever it becomes known that one detection method is in place, criminals will adapt their strategies and try others.
In this paper we have detected credit card frauds using Data Mining techniques. Data Mining is the process of automated extraction of predictive information from large databases. It predicts future trends and finds behavior that the experts may miss as it lies beyond their expectations. Data mining is part of a larger process called knowledge discovery; specifically, the step in which advanced statistical analysis and modeling techniques are applied to the data to find useful patterns and relationships. This paper will present an overview of the traditional Predictive modeling technique for fraud detection and enhanced the DC-1 data mining algorithm for the same.
2. TYPES OF CREDIT CARD FRAUDS
2.1 INDUSTRY FRAUDS
Credit card fraud may be perpetrated in various ways, including simple theft, application fraud and counterfeit cards. In all of these, the fraudster uses a physical card, but physical possession is not essential in order to perpetrate credit card fraud: one of the major areas is ‘cardholder-not-present’ fraud, where only the card details are given (over the phone).
2.1.1 STOLEN CARD
Use of a stolen card is perhaps the most straightforward type of credit card fraud.
In this case, the fraudster typically spends as much as possible in as short a space of time as possible, before the theft is detected and the card stopped, so that detecting the theft early can prevent large losses.
2.1.2 APPLICATION FRAUD
Application fraud arises when individuals obtain new credit cards from issuing companies using false personal information. Traditional credit scorecards are used to detect customers who are likely to default, and the reasons for this may include fraud. Such scorecards are based on the details given on the application forms, and perhaps also on other details, such as bureau information. Statistical models, which monitor behaviour over time, can be used to detect cards, which have been obtained from a fraudulent application (e.g. a first time card holder who runs out and rapidly makes many purchases should arouse suspicion). With application fraud, however, urgency is not so important to the fraudster, and it might not be until accounts are sent out or repayment dates begin to pass that fraud is suspected.
2.1.3 CARDHOLDER-NOT-PRESENT FRAUD
Cardholder-not-present fraud occurs when the transaction is made remotely, so that only the card’s details are needed, and a manual signature and card imprint are not required at the time of purchase. Such transactions include telephone sales and online transactions, and this type of fraud accounts for a high proportion of losses. To undertake such fraud it is necessary to obtain the details of the card without the cardholder’s knowledge. This is done in various ways, including ‘skimming’, where employees illegally copy the magnetic stripe on a credit card by swiping it through a small handheld card reader, ‘shoulder surfers’ who enter card details into a mobile phone while standing behind a purchaser in a queue, and people posing as credit card company employees taking details of credit card transactions from companies over the phone.
2.1.4 COUNTERFIET CARDS
Counterfeit cards, currently the largest source of credit card fraud, can also be created using the information over phones. Transactions made by fraudsters using counterfeit cards and making cardholder-not-present purchases can be detected through methods, which seek changes in transaction patterns, as well as checking for particular patterns which are known to be indicative of counterfeit.
2.2 ONLINE FRAUD
Online credit card fraud against merchants can be broken out into three major categories:
Organized Fraud
Opportunistic Fraud
Cardholder Fraud
2.2.1 ORGANIZED FRAUD
It is a form of organized crime. The criminals use identity theft or some other means to apply for valid credit cards under someone else's name. Once issued, they set up a drop location where they have goods delivered to (usually a vacant house or apartment) and they spend the cards up to their limit. When the bill comes 30 - 45 days later, there's nobody there to pay it and the criminals move on to another credit card. A minor variation on this theme is the hacker/cracker using software to generate seemingly valid credit card numbers. Both types of criminals are normally looking for items that can be easily converted into cash. These are probably the hardest criminals to catch because they know all the ins and outs of the system and are constantly altering their techniques as soon as an anti-fraud measure begins to show any level of success.
2.2.2 OPPORTUNISTIC FRAUD
It is, quite simply, fraud that is committed because the opportunity happens to present itself. Perhaps a waiter, a little short on cash, copies down the credit card info from a customer and then goes online and buys his wife a nice birthday present. There are a million variations on this but essentially; the person committing fraud doesn't normally do this for a living. They are amateurs who happened to take advantage of an opportunity.
2.2.3 CARDHOLDER FRAUD
It is when the legitimate cardholder is the person committing fraud. Sometimes they claim they never received the merchandise. Sometimes they claim they never ordered the merchandise. Whatever the excuse, the cardholder knows how card not present transactions are treated by the credit card companies and aims to take advantage of the system. Even if the merchant calls the customer and confirms that they placed the order, when the bill comes they can claim they never heard of the company and the credit card company will stick the merchant with the liability. A minor variation on this type of fraud is the spouse or children who use the card and then deny the charges. Usually the actual cardholder is completely ignorant of the unauthorized use but the result is still the same for the merchant.
3. FRAUD DETECTION USING DATA MINING TECHNIQUES
Data mining techniques go well beyond the limitations of simple exceptions reporting by identifying suspicious cases based on patterns in the data that are suggestive of fraud. Patterns in data that can be indicative of fraud can have one or more of the following characteristics:
Unusual data values which deviate from the norm in some way
Unusual relationships among data values and/or records
Changes in the behavior of those involved in the transactions.
Characteristic Data Mining Technique
Unusual data Outlier Analysis;
Frequency of occurrence;
Cluster Analysis;
Algorithms
Unusual relationships Outlier Analysis;
Frequency of occurrence;
Cluster Analysis;
Link Analysis.
Changes in behavior Outlier Analysis;
Frequency of occurrence.
These are some of the data mining techniques for detecting fraudulent transactions having above characteristics.
4. OUR IMPLEMENTATION-ENHANCED DC-1 ALGORITHM
4.1 DC-1 FRAMEWORK
Our approach to building a fraud detection system is to classify individual transactions as fraudulent and legitimate. In sum, the problem comprises three questions, corresponding to a component in the framework. The questions are:
1.Which transactions are important? Which features or combination of features are useful for distinguishing legitimate behavior from fraudulent ones?
2. How should profiles be created? Given an important feature, how should we
characterize/profile the behavior of a credit card holder with respect to the feature, in order to notice important changes?
3. When should alarms be issued? Given the results of profiling behavior based on multiple criteria, how should they be combined to be effective in determining when fraud has occurred?
FIRST DETECTOR CONSTRUCTOR FRAMEWORK
TRANSACTIONS
RULES
MONITOR
TEMPLATES
PROFILING
MONITORS
4.2 EXPLANATION
The Detector Constructor framework (DC-1) starts with analyzing available transaction records including fraudulent transactions.
(1) CLASSIFICATION RULE LEARNING
First, based on the given history of an account, transactions of an account are analyzed and labeled as fraudulent transactions and legitimate (non-fraudulent) transactions. The local set of rules for the account is searched. For example, for one specific account, the following classification rule is devised
(No. of transactions>=restricted no. of transactions) AND (amount of transactions> Threshold value) = Fraud transaction.
However, it is required to have a set of rules, a priori rules that can perform as fraud indicators, since the rules generated are specific to one single account. In order to generate rules that can apply to as many accounts as possible, this algorithm is devised, controlled by two parameters such as Trules and Taccts. Trules is defined as a threshold on the number of rules required to cover each account, and Taccts is defined as the number of accounts which a rule must have been found in to be selected at all. After an account is examined with a certain number of rules and a rule is applied to a certain number of accounts, a rule is selected. The list of rules generated from each account is reviewed.
Finally, the rule that appears the most frequently from the list of the entire account set is chosen.
(2) CONSTRUCTION OF PROFILING MONITORS
After rules are selected, a set of monitors is built. The purpose of profiling monitors is to investigate the sensitivities of accounts to general rules. The construction of profiling monitors consists of two stages, a profiling stage and a usage stage. In the profiling stage, a general rule is applied to a portion of an account’s legitimate usage to evaluate the account’s normal activities. In other words, legitimate activities of an account are summarized into profiling monitors through the use of templates. The statistics of the account’s normal activities is saved to that account. Later, in the usage stage, the monitor is applied to the whole part of the account (i.e. account for a month). The resulting statistics can be used to examine the abnormality of the usage of the account per month.
During this process, the profiling monitors are built by the monitor constructor, which is a set of templates. These templates examine the conditions of the rules. Based on the result of it, each rule-template is finally derived as a profiling monitor. For example, templates are made up with various statistical expressions such as a threshold monitor and a standard deviation monitor. In the threshold monitor, binary categorizations are made according to whether the user’s behavior of a day exceeds the threshold defined with the portion of a day. Also, in the standard deviation monitor, different output values are defined according to how much the user’s behavior in that month deviates from the rule’s condition defined in that year.
(3) COMBINATION OF EVIDENCE FROM THE MONITORS
To improve the confidence of the detection, monitors are combined with evidence resulted from the application of monitors to the sample data. For example, monitors generated are applied to a sample account for a month, and their outputs, whether fraudulent activities are detected or not, are expressed as a result vector for that month. The evidence about the account for that month, whether the account month truly has frauds or not, is introduced together with the outputs. Then, the outputs are weighted with the combination of evidence. Also, the combination of evidence is trained with the threshold value based on the sum of weights. Hence, it is possible to put more confidence on monitors with larger weights to prevent false alarms. After all, there may exist redundant and ineffective rules.
To reduce the number of monitors, it proposes the use of a sequential forward selection process. Finally, fraud detectors are selected from monitors combined with evidence.
4.3 THE ENHANCED DC-1 ALGORITHM
We have enhanced the DC-1 algorithm by first clustering the data sets using K-means algorithm and then applying DC-1 technique. The rules Ra generated in the DC-1 algorithm is also clustered using K-means algorithm.
Clustering is a popular approach to implementing the partitioning operation. Clustering methods partition a set of objects into clusters such that objects in the same cluster are more similar to each other than objects in different clusters according to some defined criteria. The k-means algorithm is well known for its efficiency in clustering large data sets.
4.3.1 K-means procedure
Given a set of numeric objects X and an integer number k (<=n), the k-means algorithm searches for a partition of X into k clusters that minimizes the within groups sum of squared errors. This process is often formulated as the following mathematical program problem P
k n
Minimize P (W, Q) = wi, l d(Xi , Ql)
l=1 i=1
k
subject to wi, l = 1, 1<= i <= n
l=1
wi, l 0,1 , 1<= i <=n, 1<= l<=k
where W is an n k partition matrix, Q = Q1, Q2, . . . , QK is a set of objects in the same object domain, and d(. , .) is the squared Euclidean distance between two objects.
4.3.2 THE FINAL ENHANCED ALGORITHM
Given:
Accts: set of all accounts obtained after clustering using K-means (i.e.) the set Q.
Rules: set of all fraud rules generated from Accts
Trules : (parameter) Number of rules required to cover each account
Taccts : (parameter) Number of accounts in which a rule must have been found
Output:
S: set of selected rules.
1. /*Initialization*/
2. S = {};
3. for (a Accts) do Cover[a] = 0;
4. for (r Rules) do
5. Occur[r] = 0; /*Number of accounts in which r occurs*/
6. AcctsGen[r] = {}; /*Set of accounts generating r */
7. end for
8. /* Set up Occur and AcctsGen */
9. for (a Accts) do
10. Ra = set of rules generated from a;
11. for (r Ra) do
12. Occur[r] : = Occur[r] + 1;
13. add a to AcctsGen[r];
14. end for; end for
15. Call K-means procedure to cluster Ra ; /* rules are clustered here using K-
means */
16. /* Cover Accts with Rules */
17. for (a Accts) do
18. Ra = list of rules generated from a;
19. sort Ra by Occur;
20. while (cover[a] < Trules) do
21. r := highest-occurrence rule from Ra
22. Remove r from Ra
23. if (r S and Occur[r] Taccts ) then
24. add r to S;
25. for (a2 AcctsGen[r]) do
26. Cover[a2] = Cover[a2] + 1;
27. end for; end if
28. end while; end for
4.4 ADVANTAGES OF ENHANCED DC-1 ALGORITHM
It is efficient in processing large data sets.
It terminates quickly, although often at a local optimum.
It operates on numeric values, which suits transaction data.
The clusters it produces have convex shapes.
These properties make it easier to find the maximum cover of the generated rules and so distinguish legitimate from fraudulent transactions.
5. OUR SUGGESTIONS FOR CONTROLLING ONLINE FRAUD
1. Do Mod10 algorithm testing. Mod10 (also known as the Luhn algorithm) will tell you whether the card number being presented could be a valid card number. It does not mean that the number was ever issued, or that the card number is an active account, but it will tell you whether the digits the customer typed in fall within the range of valid credit card numbers issued by the major credit card companies. This should be the first test applied to any credit card number you process: if the card fails Mod10, it will fail all other attempts to authenticate and process a charge against it. (A minimal implementation is sketched after this list.)
2. Obtain an authorization and AVS check on every transaction. When a merchant processes a credit card transaction, normally they must receive an authorization for the amount of the order. This usually guarantees that the card is a valid card number and that the person has available credit for the amount being requested. The credit card companies make available AVS (Address Verification Service), which you can use to further verify the validity of the card. AVS matches the billing address provided by the customer with the zip code held on file at the issuing bank. While there are numerous reasons why the card may fail AVS (recent change of address, AVS computers down, etc.), an AVS failure should be a red flag that needs further investigation.
3. Be extremely wary of orders where the shipping and billing addresses are not the same. Obviously, if you are in a business that sells items traditionally given as gifts (flowers, for example) this may be difficult, but if the majority of your customers bill to the same address they ship to, be cautious of orders that are being shipped to a different address.
4. All newly issued credit and debit cards carry a 3-digit non-embossed number on the back of the card. This number is not included in the data contained on the magnetic stripe of the card and is not printed on credit card statements or anywhere else, so asking the customer for it helps confirm that the card is physically in their possession.
5. Pay extra attention to orders that are for amounts greater than the norm or consist mostly of one type of item. Criminals trying to commit fraud will often place large orders for specific items that they know they can resell easily. For instance, if you sell DVDs and you receive an order for 25 copies of the same title, you should investigate further. Customers who place multiple small orders should draw your attention as well: some criminals are aware that cautious merchants scrutinize large transactions, so they simply place many smaller orders rather than one large one.
6. Be suspicious of orders that are placed for rush or expedited delivery. Since criminals aren't paying the shipping fees they normally don't care about the extra cost and they want the order shipped as quickly as possible. The longer the order sits around before shipping the greater the chance the fraud will be uncovered.
7. Any order consisting mostly or entirely of high ticket items should receive extra scrutiny. High-ticket items usually have a high resell value so they tend to be on the shopping list of many criminals.
8. Be alert to orders that originate from email addresses issued by free hosting providers like yahoo.com, hotmail.com, etc. Many sites simply will not accept orders from email addresses originating at free hosting providers.
9. Keep an eye out for orders from multiple accounts or credit card numbers being shipped to the same delivery address. This may indicate a drop box or drop location where criminals are having orders delivered.
10. Orders being shipped to an international address should earn a closer inspection. Pay particular attention if the card or the shipping address is in an area prone to credit card fraud.
11. Watch for multiple orders being placed over a short period of time. Many criminals will attempt to run up a card before the owner finds out or in the case of a stolen identity before the first bill arrives.
12. Pick up the phone. If you have any suspicions about an order call the contact phone number given by the customer and attempt to confirm the details of the order. If you still don't feel comfortable, call the issuing bank and ask to confirm the account details.
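As an aid to suggestion 1 above, here is a minimal Python sketch of the Mod10 (Luhn) check; the sample number is a well-known test value, not a real account:

    def luhn_valid(card_number: str) -> bool:
        """Return True if the digits pass the Mod10 (Luhn) check."""
        digits = [int(c) for c in card_number if c.isdigit()]
        total = 0
        # Double every second digit from the right; subtract 9 if it exceeds 9.
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4539 1488 0343 6467"))  # True: a textbook test number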
6. OUR SUGGESTION FOR FRAUD PREVENTION
Once again we stress that fraud prevention describes measures to stop fraud occurring in the first place. This can be done with biometric techniques such as:
Fingerprints
Iris recognition
Facial recognition
Of these, we suggest that iris recognition can be used most effectively for credit card fraud prevention, because the iris code (a binary code) can easily be stored on the credit card and verified by card-sensing machines fitted with a camera. When a person stands before the machine, his iris is captured through the camera and converted into a binary code, which must match the original code stored on the card. If it does not, no further transaction is allowed, so there is no chance of the fraud occurring in the first place.
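A minimal sketch of the matching step we describe, assuming the iris code is a fixed-length binary string. Production systems use much longer codes (e.g., 2048 bits) with masking and rotation tolerance, so the codes and threshold here are purely illustrative:

    def hamming_distance(code_a: str, code_b: str) -> float:
        """Fraction of bits that differ between two equal-length binary codes."""
        assert len(code_a) == len(code_b)
        return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

    stored_code   = "1011001110001011"  # hypothetical code stored on the card
    captured_code = "1011001110011011"  # hypothetical code from the camera

    # A small threshold tolerates capture noise while rejecting impostors.
    if hamming_distance(stored_code, captured_code) < 0.32:
        print("Match: allow transaction")
    else:
        print("No match: block transaction")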
7. CONCLUSION
Fraud is a deliberate deception to obtain assets or resources. In the digital world, where speed and anonymity reign, this deception is costly and pervasive. Several criteria can be used when fraud is to be detected. Our algorithm and our suggested steps for online fraud detection satisfy the following criteria:
For cost-management, fraud-screening mechanisms should be internal to the processing system, not outsourced to a third-party.
Fraud programs must be independently accessible and adaptable to the changing needs of the merchant.
For effective fraud screening, the process should be multi-tiered such that there are multiple levels of approval required prior to dispatch for final authorization.
Real-time transaction reports should be easily and independently accessible to the merchant.
Systems that refer to databases of past and present fraudulent cards and customer information provide tremendous value to merchants and should be part of the program.
The responsibility and risk of fraud is multi-faceted. Merchants, financial institutions and e-payment processors must be vigilant about fraud prevention, and must work in tandem to ensure the continued success and growth of this new commerce frontier.
AIDE-D-VOIX: HOME AUTOMATION THROUGH SPEECH RECOGNITION
Abstract
Home automation is the technology that enhances the interactivity and autonomy of a home. It is a field with potentially explosive growth due to recent rapid improvements in computing power.
Speech recognition is the ability of a computer system to respond accurately to verbal commands. Speech recognition makes use of specific AI (artificial intelligence) rules to determine what words the speaker is saying. Speech recognition programs allow people to give commands and enter data using their voices rather than a mouse or keyboard.
Objective
The main aim of designing this software is to provide an accessibility tool for individuals who have physical or cognitive impairments or disabilities.
A software program is developed for recognizing speech commands. It takes input from the user in the form of speech, recognizes it, and acts according to the conditions specified in the code, activating the corresponding appliance.
“VOICE OUT YOUR SILENT THOUGHTS”
Introduction
Home automation through speech recognition is the basic concept behind this product. A software product is developed to control home appliances through speech commands. The user gives a speech command, which must be recognized by the system. Once the speech command is identified, it is compared with the commands given in the source code. If a match is found, a signal is generated to control the corresponding appliance.
What is Home Automation?
Home automation is a field within building automation, specializing in the specific automation requirements of private homes and in the application of automation techniques for the comfort and security of its residents. Although many techniques used in building automation (such as light and climate control, control of doors and window shutters, security and surveillance systems, etc.) are also used in home automation, additional functions in home automation include the control of multi-media home entertainment systems, automatic plant watering and pet feeding, and automatic scenes for dinners and parties.
The main difference between building automation and home automation is, however, the human interface. When home automation is installed during construction of a new home, usually control wires are added before the drywall is installed. These control wires run to a controller, which will then control the environment.
What is Speech Recognition?
Speech recognition (in many contexts also known as automatic speech recognition, computer speech recognition or, erroneously, voice recognition) is the process of converting a speech signal to a set of words by means of an algorithm implemented as a computer program. Speech recognition systems can have a wide performance range as measured by word error rate, depending on several factors: the environment, the speaking rate of the speaker, and the context (or grammar) being used in recognition. Speech recognition presently comes in two styles: discrete speech and continuous speech. The older technology, discrete speech recognition, requires the user to speak one word at a time. The newer technology, continuous speech recognition, allows the user to dictate at a more or less normal speaking rate. Most speech recognition users would agree that dictation machines can achieve very high performance in controlled conditions; part of the confusion comes from the mixed usage of the terms speech recognition and dictation.
Speaker-dependent dictation systems requiring a short period of training can capture continuous speech with a large vocabulary at normal pace with a very high accuracy.
Nowadays, speaker-independent software with higher efficiency has been developed. Such software requires very little training and has a vast database of words. Only a short period of additional training is needed to raise its accuracy toward the maximum.
Most commercial companies claim that recognition software can achieve between 98% and 99% accuracy (getting one to two words out of one hundred wrong) if operated under optimal conditions.
Who Makes Speech Recognition Software?
There are many developers involved in speech recognition, from hardware manufacturers who are trying to build the feature directly into computer hardware, to software developers who write packages for existing PCs. Among the many participants are:
Dragon Systems. Dragon Systems is the leading developer of software-based speech-recognition systems, with several different packages available depending on the user's needs. Dragon Systems' Dragon NaturallySpeaking Deluxe Edition includes the ability to recognize continuous speech.
IBM. IBM is another leading speech recognition software developer. With IBM’s software, you can surf the Internet hands-free.
Microsoft. Microsoft is working on speech recognition and is trying to build it directly into the operating system so any new PC would automatically be speech-recognition ready.
What Are The System Requirements?
At a minimum, you need:
A Pentium 133 MHz processor
32 MB of RAM
A high-quality, noise-reducing microphone
A 16 bit sound card
Applications of Speech Recognition
Home Automation
Command recognition - Voice user interface with the computer
Dictation
Interactive Voice Response
Medical Transcription
Pronunciation Teaching in computer-aided language learning applications
Automatic Translation etc…
Why choose Speech Recognition?
The capabilities of speech recognition technology are developing at a remarkable pace. Speech recognition technologies that were only dreamed of a few years ago are now available to everyone at a reasonable cost. On personal computers, speech recognition serves as an aid for people with mobility impairments; for them it is not a convenience but a necessity. Many people with disabilities who use the technology go from complete dependence on others to having the independence necessary to complete their work hands-free.
Benefits Of Speech Recognition:
Speech recognition offers many benefits, from fast, accurate data entry to help for people with disabilities. These benefits include:
Protects Against Repetitive Stress Injury. If you do not have to type or use a mouse, at least not very often, then you are less likely to receive a repetitive stress injury like carpal tunnel syndrome.
Frees Your Hands and Eyes for Other Jobs. If you do not have to sit in front of your PC and watch the screen or keyboard, you are free to read your notes while working, pace the room while dictating, or write and stretch at the same time.
Aids in Data Entry. Speech recognition allows a typing rate that averages around 45 to 65 words per minute, with some users achieving rates as high as 90 words a minute. A data entry clerk may type 80 words per minute, but cannot do it all day without pausing to walk around and stretch, so the human and speech-recognition rates can average out to about the same number. Additionally, speech recognition nearly always finds the exact word, so its spelling is nearly always correct.
Provides PC Access for People with Disabilities. For those users who physically cannot type, or have problems writing or using a keyboard, speech recognition not only opens up the world of computers, it can help open up new opportunities for education, work and communication.
Cost Savings. Although early packages cost several thousand dollars, today's speech-recognition packages start at less than a hundred dollars and go up from there. Considering the cost of a well-trained legal or medical secretary, the cost of work-related injuries, and the benefits of increased productivity, speech-recognition software can quickly pay for itself.
Requirements
PC
Microphone
Speech recognition engine
Interrupt Inpout32.dll
Relay Circuit (+5V relay)
Appliance
Methodology
A typical complete speech recognition process consists of the following parts: (1) sound conversion, (2) fragmentation, and (3) recognition.
Sound Acquisition: The user's voice is captured with the help of a microphone in a handset.
Sound Conversion: The digital sound captured by the sound card through the microphone is converted into a more manageable format. The converter translates the stream of amplitudes that form the digital sound wave into its frequency components. It is still a digital representation, but one more akin to what a human ear really perceives.
Fragmentation: The next stage is the identification of phonemes, the elementary sounds that are the building blocks of words. Each frequency component of the sound is mapped to a specific phoneme. This process completes the conversion from sounds to words.
Recognition: The final step is to analyze the string. A grammar, the list of words known to the program, lets the engine associate the phonemes with particular words.
The recognition procedure is divided into two consecutive stages, one operating on the data set and one on the test set:
(1) Training (for data set)
(2) Comparison and Classification (for test set)
(1) Training: The words to be recognized must be added to the database provided with the software. Words can be added to the database dynamically. Some training is required for accurate recognition.
(2) Comparison: At this stage, the recognized word is compared with the words in the program, and the appropriate function is performed based on the result (a minimal sketch of this stage follows).
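A minimal Python sketch of this comparison stage, assuming the engine returns the recognized word as a string; the command words and bit assignments are illustrative, not from the project code:

    # COMMANDS maps hypothetical spoken words to the parallel-port data bit
    # assigned to each appliance.
    COMMANDS = {"light": 0b00000001, "fan": 0b00000010}

    def handle(recognized_word, current_state=0):
        bit = COMMANDS.get(recognized_word.lower())
        if bit is None:
            return current_state   # unknown word: do nothing
        return current_state ^ bit # toggle that appliance's bit

    state = handle("light")        # -> 0b00000001: first appliance on
    state = handle("fan", state)   # -> 0b00000011: both appliances on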
System Architecture
Theory: Components of Architecture
The components in the architecture are:
1. PC
2. Circuit & Relay
3. Appliance
PC: There are three essential components in the PC:
1. User program
2. Speech recognition engine
3. Interrupt
Speech recognition engine: The recognizer used here is speaker-independent software with an inbuilt database containing a very large number of words. The user program activates the recognition engine when it detects a sound. Initially, the words chosen to control each appliance are loaded from the user program into the engine database. The engine splits the received utterance into phonemes and tries to group homophones together from the vast collection of words in the database. These grouped words are then compared with the set of words already saved in the database; if a match occurs, the word is returned to the user program.
User program: The user program then checks the string returned by the engine against its set of options. If a condition is satisfied, a signal containing the data is sent to the port using the interrupt INPOUT32.dll. This signal is transferred to the relay circuit, which forwards the activation or deactivation signal to the appliance.
Interfacing PC – INPOUT32.DLL:
Works seamlessly with all versions of Windows (98, NT, 2000 and XP)
Uses a kernel-mode driver embedded in the DLL
No special software installation is required: the driver is installed and configured automatically when the DLL is loaded
No special APIs are required, only the two functions Inp32 and Out32
Can easily be used from VC++ and VB
The two functions exported from inpout32.dll are:
1) 'Inp32' reads data from a specified parallel port register.
2) 'Out32' writes data to a specified parallel port register.
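A minimal sketch of driving these two functions from Python via ctypes, assuming a Windows machine with inpout32.dll on the DLL search path and a parallel port at the conventional base address 0x378 (the original project calls the same functions from VC++ or VB):

    import ctypes

    inpout = ctypes.WinDLL("inpout32.dll")
    LPT1_DATA = 0x378  # data register: pins 2-9 (D0-D7)

    # Setting bit D0 high drives pin 2; the relay circuit inverts and
    # amplifies this level to switch the first appliance.
    inpout.Out32(LPT1_DATA, 0b00000001)

    status = inpout.Inp32(LPT1_DATA)  # read back the data register
    print(f"data register = {status:#04x}")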
After the signal is generated by the interrupt and sent to the port, it is transferred to the relay circuit, which in turn passes it to the corresponding appliance.
What is a port?
A port is a set of signal lines through which the CPU exchanges data with other components. We use ports to communicate with modems, printers, keyboards, mice, etc. In signaling, an open signal is "1" and a closed signal is "0", as in a binary system. A parallel port sends 8 bits and receives 5 bits at a time. The serial port RS-232 sends only 1 bit at a time, but it is bidirectional, so it can send 1 bit and receive 1 bit at a time.
Parallel Port: We use the parallel port here, since it allows simultaneous control of up to eight appliances.
Signal         Bit    Pin     Direction
-Strobe        ¬C0    1       Output
+Data Bit 0    D0     2       Output
+Data Bit 1    D1     3       Output
+Data Bit 2    D2     4       Output
+Data Bit 3    D3     5       Output
+Data Bit 4    D4     6       Output
+Data Bit 5    D5     7       Output
+Data Bit 6    D6     8       Output
+Data Bit 7    D7     9       Output
-Acknowledge   S6     10      Input
+Busy          ¬S7    11      Input
+Paper End     S5     12      Input
+Select In     S4     13      Input
-Auto Feed     ¬C1    14      Output
-Error         S3     15      Input
-Initialize    C2     16      Output
-Select        ¬C3    17      Output
Ground         -      18-25   Ground
(¬ marks lines that are inverted in hardware.)
Circuit & Relay:
Inverting the input: In the Windows operating system, a signal is sent to all ports while booting, and the appliances must not be activated at that moment, so the signal is inverted. Corresponding code is written for the activation and deactivation of each appliance. A NOT gate (IC 7404) is used for the inversion.
Amplifying the input: The voltage coming out of the PC (about +2.3 V) is too low to activate the +5 V relay, so an amplifying circuit is required to raise the voltage.
Relay: A relay is an electrically operated switch. Current flowing through the relay's coil creates a magnetic field, which attracts a lever and changes the switch contacts. The coil current can be on or off, so relays have two switch positions; they are double-throw (changeover) switches. According to the voltage it receives, the relay activates or deactivates the appliance.
The relay's switch connections are usually labeled COM, NC and NO:
COM = Common, always connect to this; it is the moving part of the switch.
NC = Normally Closed, COM is connected to this when the relay coil is off.
NO = Normally Open, COM is connected to this when the relay coil is on.
Appliance: The signal is finally delivered to the appliance. Any appliance that runs on a normal 230 V supply can be automated with this system.
Work Plan
PHASE I:
In the Speech Recognition phase, the speech engine recognizes user speech commands and accuracy of the software is tested.
PHASE II:
After the user commands have been recognized, the user program generates the control signal from the PC.
PHASE III:
The signal is generated by the interrupt containing the data, which is to be passed to the relay circuit.
PHASE IV:
The amplifying circuits and the electronic switches/relays are designed.
PHASE V:
Activation or deactivation of the appliance.
Future Enhancements
Wireless automation
Multi lingual Speech Mode
Number of appliance can be increased
Conclusion
This system supports effective control of appliances by people with disabilities. This method of automating home appliances serves as an aid to mobility-impaired people.
BIOINFORMATICS
ABSTRACT
Bioinformatics blends Computer Science, Biology and Chemistry together. It uses computers and techniques developed in Computer Science to solve many problems in Biology and Chemistry. Several applications include molecule and protein modeling, protein sequence alignment, protein folding, rational drug design and database searching to cull information from large genomes and protein databanks. Proteins interact with each other in most reactions that occur in the body. They are involved in everything from DNA transcription and replication to viral protection to energy consumption and distribution among the cells. Understanding how a protein interacts with other proteins and how it functions is crucial in understanding how the body works and functions. There are three main types of protein interactions:
Protein-protein interactions
Protein-DNA interactions
Interactions between monomers of multimeric proteins
Our goal is to study these reactions and to simulate them. Motion planning techniques are good at computing paths when the robot has many degrees of freedom. Several algorithms, especially Probabilistic Roadmap Methods (PRMs) and their variations, have had much success in situations where the robot is complex.
Modeling Protein-Protein Interactions with the Aid of Motion Planning Algorithms
1. Introduction
Bioinformatics has been receiving a lot of attention lately from the research community. Bioinformatics is increasing in popularity because it has applications in all facets of life, and it is a relatively new field with very fertile ground.
Protein-protein interactions occur between two or more proteins. Some examples are the interaction involving GroEL and GroES to aid in protein folding, the interaction between calmodulin and myosin to produce muscle contraction, and protein/antibody binding. Protein-DNA interactions involve a protein and a piece of DNA; this situation occurs mostly in DNA replication. Finally, some proteins are made up of several chains or loops. These chains are intertwined and interact with each other to dictate how the protein folds, its function, and how it interacts with other things. For example, HIV-1 protease is made up of two chains (or monomers) that move with respect to each other.
The difficulty is that proteins have hundreds to thousands of degrees of freedom. Even when assumptions are made and their structures are simplified, they are still very complex and difficult to simulate in a reasonable amount of time. Because of their complexity, most simulations consider proteins as rigid objects. This is an unreasonable assumption because some proteins are known to undergo large conformational changes. (They exhibit large movements during interactions.) They should be considered flexible, not rigid.
By considering the protein to be an articulated robot (a robot with several links), we can apply the same techniques developed for robot motion planning to protein simulation.
2. Evidence that Proteins are Flexible
Proteins do undergo large conformational changes. The rigid assumption is grossly inadequate in some cases.
2.1 GroEL/GroES Complex
Proteins may make "bad" connections when trying to fold to their native state. These "bad" connections, or aggregates, can cause the protein to function improperly. Chaperones can prevent and reverse such "bad" connections by binding to and releasing the unfolded or aggregated protein during the folding process. Chaperones do not increase the rate of protein folding; they only increase its efficiency.
GroEL and GroES work together (interact) to help proteins fold into their native state properly. They do this by surrounding the protein like a cage, thereby providing a safe environment for the protein. GroES binds to the top of GroEL and forms a cage around the protein. During this interaction, GroEL undergoes large conformational changes. Pictures of these proteins (obtained from the Protein Data Bank and viewed through RasMol) are shown in Figure 2.
Figure 1: GroEL/GroES are two chaperones that work together to increase the efficiency of protein folding. The top "cap" is GroES and the bottom two rings are GroEL.
Figure 2: GroEL is shown before (a) and after (b) binding to GroES. GroES is removed for clarity. GroEL undergoes large conformational changes during the binding process. It stretches upwards and twists in the presence of GroES.
2.2 DNA Polymerases
DNA is made up of two helices. They are designed in such a way that, given one helix, the other can be easily determined. During DNA replication, these two helices are split apart, and for each helix the cell creates the other half; in effect, the cell takes one piece of DNA and makes a copy of it. DNA polymerases catalyze this process.
DNA polymerase I (Pol I) was the first enzyme discovered to help synthesize DNA. Pol I has three main functions: it acts as a polymerase, as an exonuclease in the 3'→5' direction, and as an exonuclease in the 5'→3' direction. As a polymerase, it helps create the second helix by binding the correct bases. As an exonuclease, it can correct its mistakes; the exonuclease activity is like proofreading.
This enzyme is shaped like a hand (called the Klenow fragment); see Figure 3. When it functions as a polymerase, it binds to the DNA just as you would grab a rod with your hand. When it functions as the 3'→5' exonuclease, the protein undergoes a large conformational change and forms another cleft perpendicular to the cleft that contains the polymerase site. There is yet another binding site for the third function, the 5'→3' exonuclease.
The DNA also changes conformation during interaction with Pol I. As Pol I "grabs" the DNA, it bends it about 80 degrees. This is large enough that it is no longer realistic to consider the DNA a rigid object; the DNA, as well as the protein, must be thought of as flexible.
Figure 3: DNA Polymerase I is shaped like a hand – shown in blue. It “grabs” the piece of DNA during DNA replication when it functions as a polymerase. The protein is shown both space-filled (a) and as a ribbon (b).
2.3 Calmodulin
Calmodulin regulates many important functions in the body by reacting to changing calcium (Ca2+) levels. For example, it interacts with myosin to perform muscle contractions. Its sensitivity to calcium levels is due to its readiness to bind to calcium. Calmodulin has two globular domains connected by a single alpha helix (see Figure 4(b)). Each globular domain contains two Ca2+ binding sites. Calmodulin undergoes large conformational changes when bound to Ca2+. Also, when bound to myosin, the globular domains remain relatively unchanged, but the alpha helix connecting them unwinds and contains a sharp bend. The drastic change in conformation is mainly due to the change in the alpha helix.
3. The Project
The goal of this research is to simulate interactions between proteins. As discussed above, proteins are complex, dynamic structures. Some motion planning algorithms have had much success in computing paths for very complex robots. We want to apply these techniques from robotics to protein-protein interaction simulation.
3.1 Background
One class of motion planning algorithms, Probabilistic Roadmap Methods (PRMs), has been very successful in computing paths in a reasonable amount of time even when the robot has many degrees of freedom.
Figure 4: Calmodulin is shown in its unbound (a) and bound (b) states. This large conformational change is due to the central alpha helix as it winds and unwinds.
Although PRMs are not complete (i.e., they are not guaranteed to find a path if one exists), they are able to find solutions to many problems quickly. Complete algorithms do exist, but they are prohibitively slow and computationally expensive. PRMs sacrifice completeness for speed.
PRMs build a roadmap through the robot’s configuration space (C-space) that the robot can use to navigate its environment. A configuration is a unique position and orientation of the robot. The C-space consists of all configurations, valid or not. In robotics applications, a valid configuration is considered to be one that is entirely collision-free. The roadmap is much like a state highway map. It consists of nodes (cities) and edges (streets).
Roadmap construction consists of two phases, node generation and node connection. During node generation, nodes are created that form the basis of the roadmap. These nodes can be generated in a number of ways. The traditional PRM generated these uniformly at random. These are easy to compute and provide good coverage of the C-space. Variations of the traditional PRM used other methods to generate nodes. During node connection, PRM tries to connect each node with its k closest neighbors via some local planner. Variations of PRM use different methods to connect the nodes and different local planners.
Once the roadmap is built, a path between any start configuration and goal configuration is easily found. First the start and the goal are connected to the roadmap. Then the roadmap is searched for the shortest path between the two nodes using a graph search algorithm.
One particular variation of PRMs focuses on single-query planning. Instead of building a roadmap that can solve multiple queries (start and goal pairs), these planners tailor the roadmap to one particular query. Two similar methods were developed independently at Iowa State University and Stanford University: Steve LaValle's Rapidly-exploring Random Trees (RRTs) and David Hsu's planner for expansive configuration spaces. Both methods grow a roadmap from the start towards the goal and from the goal towards the start until the two meet.
RRT alternates node generation and node connection as it expands, or grows, the roadmap. First a node is generated at random; this node specifies the direction of expansion. Then the algorithm selects the node in the tree closest to the new node and makes a small step from it towards the direction node. If the step is collision-free, the algorithm adds it to the roadmap and connects it to the node it began from. After each new node and edge is added, the algorithm checks whether the goal has been reached or, in the case of two trees, whether the trees meet. This process is repeated until a solution is found. Since the growth is biased by random nodes, the expansion tends to be global, pulling the roadmap out into unexplored regions of the C-space.
Hsu's algorithm differs in how expansion is biased. Instead of picking a random node and walking towards it, his algorithm picks a node x in the tree based on some probability. Then several new nodes are generated in the neighborhood of x; a new node is kept and added to the roadmap only if an edge exists between it and x. Again, after each new node and edge is added, the algorithm checks whether the goal has been reached or the two trees meet. This algorithm implements local expansion through neighborhoods, while RRT uses global expansion through random sampling.
3.2 Biology Considerations
Motion planning algorithms were designed for robotics applications. With just a description of the robot, the environment, and a collision checker, difficult motion planning problems can be solved with ease. These same techniques, although originally intended for robots, can be applied to proteins.
Proteins are made up of atoms and bonds. Each atom can be modeled as a sphere, and each bond can be modeled as a rod that connects two atoms together. The protein can be considered to be an articulated robot, or one with multiple links. Here, the bonds are the robot’s links and the atoms are the robot’s joints.
Linked robots move based on changes in joint angles; see Figure 5(a). A joint angle is the angle between two consecutive links. The number of joint angles plus the position and orientation of the base are the robot's degrees of freedom.
Proteins behave slightly differently. Chemists have discovered that bond lengths and bond angles (the angle between two consecutive bonds) do not change significantly during conformational changes, so we can safely assume they are fixed. Torsional angles, on the other hand, do change significantly when the protein's conformation changes; they are the main contributing factor to changes in conformation, see Figure 5(b). The number of torsional angles plus the position and orientation of the root atom are the protein's degrees of freedom.
Figure 5: (a) Linked manipulators move based on their joint angles. (b) Likewise, proteins move based on their torsional angles.
The goal of motion planning algorithms is to produce feasible paths for the robot. These paths must be realistic. In order for a path to be feasible, at every point along the path the robot must be collision-free. (A collision-free robot is one that is not colliding with itself or any other obstacle in the environment.)
The same principle holds true for proteins. Any computed path must be feasible so the simulation is realistic. The notion of a valid/feasible configuration is more complicated for a protein than for a physical robot. Not only must the protein be collision-free, it must also be energetically reasonable. In nature, protein conformations typically have low energies. The same must be true for computed conformations.
This property can be easily included in the collision-checker. Instead of merely checking for collision, the collision-checker will now also check the energy. By modeling proteins as articulated robots and including the energy function in the collision checker, the same motion planning algorithms developed for robots can be directly applied to proteins.
Unfortunately, exact energy calculations are very time-consuming and would be inappropriate in a collision checker that is called thousands of times during roadmap creation. Some assumptions can be made to reduce the running time. First, we consider only the van der Waals energy: electrostatic energy and torsional energy make up a small part of the total energy, so they can be safely neglected.
To compute the van der Waals energy, every pair of atoms must be considered. Atoms that are close together contribute most of the total energy, while atoms that are far apart contribute very little. We can therefore approximate the van der Waals energy by neglecting pairs of atoms that are far apart: we impose a cutoff distance (8 Å in our case), and if two atoms are farther apart than the cutoff, the energy for that pair is not computed. This reduces the running time of the energy calculation while still giving a good approximation.
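A minimal Python sketch of such a truncated van der Waals (Lennard-Jones) sum. For simplicity it assumes a single epsilon/sigma pair for all atoms rather than the per-atom-type parameters a real force field would use; atoms are (x, y, z) tuples:

    import math

    CUTOFF = 8.0  # angstroms, as in the text

    def lj_energy(atoms, epsilon=0.2, sigma=3.4):
        total = 0.0
        for i in range(len(atoms)):
            for j in range(i + 1, len(atoms)):
                r = math.dist(atoms[i], atoms[j])
                if r > CUTOFF:
                    continue  # distant pairs contribute little; skip them
                total += 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
        return total

    # Toy usage: the third atom lies beyond the cutoff from the other two.
    print(lj_energy([(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (20.0, 0.0, 0.0)]))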
In summary, motion planning algorithms like PRM and RRT can be applied to protein-protein interactions by modeling the protein as an articulated robot and including the energy function in the collision-checker. If the energy calculation is accurate and efficient, then these motion planning algorithms will perform well on proteins and could provide greater insight into how proteins move and function.
3.3 Basic Algorithm
Our algorithm for modeling protein-protein interactions is an extension of RRT and Hsu's planner for expansive C-spaces, with one large variation: instead of building trees, we build graphs. This way we can look at many different paths from the start (unbound state) to the goal (bound state) instead of just one solution, which improves the quality of the "best" solution and increases our understanding of protein-protein interactions.
The first concern is to generate energetically feasible nodes to build the roadmap with. Because the conformational space is n-dimensional, it is nearly impossible to perform a systematic search. If we just generate the nodes randomly, the likelihood that we will find “good” nodes is very small. To combat these difficulties, we first generate a node randomly and then perform a gradient descent to minimize its energy.
Performing an exact gradient descent is too time-consuming to consider. Instead, we approximate a gradient descent. To do this we first generate a random node near the original node and compute its energy. If its energy is less than the original node's, we declare it the new minimum and replace the original node; if its energy is greater, we throw it away. This process is repeated many times, typically anywhere from 10 to 30 iterations.
To generate a random nearby node, we first select a few rotatable bonds at random. We then apply a small, random displacement to the torsional angles of these bonds. The resulting conformation is randomly generated, but very similar to the original conformation.
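A minimal Python sketch of this approximate gradient descent, with the energy function passed in as a callable; the bond-selection count and step size are illustrative parameters, not values from the paper:

    import random

    def perturb(angles, n_bonds=3, step=0.1):
        """Randomly pick a few rotatable bonds and nudge their torsional angles."""
        out = list(angles)
        for i in random.sample(range(len(out)), min(n_bonds, len(out))):
            out[i] += random.uniform(-step, step)
        return out

    def minimize_energy(conformation, energy, iters=30):
        """Keep proposing random neighbors; accept one only if its energy is lower."""
        best, best_e = conformation, energy(conformation)
        for _ in range(iters):
            candidate = perturb(best)
            e = energy(candidate)
            if e < best_e:
                best, best_e = candidate, e
        return best

    # Toy usage: minimize a quadratic stand-in 'energy' over three angles.
    angles = minimize_energy([1.0, -2.0, 0.5],
                             energy=lambda a: sum(x * x for x in a))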
Now we have all the building blocks to implement a variation of RRT and Hsu’s algorithm. Roadmap construction is as follows:
BUILD ROADMAPS(qstart, qgoal)
1. Rstart.init(qstart)
2. Rgoal.init(qgoal)
3. for n = 1 to N do
4.   EXTEND(Rstart)
5.   EXTEND(Rgoal)
6.   CONNECT MAPS(Rstart, Rgoal)

EXTEND(R)
1. qorig ← SELECT NODE(R)
2. qnew ← GENERATE NEIGHBOR(qorig)
3. MINIMIZE ENERGY(qnew)
4. R.add node(qnew)
5. R.add edges(qnew)
We build two roadmaps, one rooted at the start conformation and one rooted at the goal conformation. During each iteration, each roadmap is extended. Then the algorithm attempts to connect the two roadmaps together. We are looking for multiple paths, so we do not stop the algorithm once CONNECT MAPS() is successful. If we were looking to save time and only compute one path, we would stop the algorithm as soon as CONNECT MAPS() is successful and a path is found.
The EXTEND() method simply generates a new nearby node, minimizes its energy, and adds it to the roadmap. Then add edges() checks for connections between the new node and its k closest neighbors. This allows us to build a graph, instead of a tree, and to search for multiple solution paths.
When an edge is checked for validity, the energy of every node along that edge is computed. We can use this information to compute an edge weight. The simplest scheme is to let the edge weight equal the sum of all node energies along the edge. This gives a higher weight to paths with higher energies, but it also gives longer edges a higher weight, which may unduly bias the algorithm towards shorter paths and away from longer paths that are energetically feasible. To avoid this, the average energy along the edge can be stored as the edge weight instead.
As long as higher weights correspond to edges with high energies and lower weights correspond to edges with low energies, edge weights can identify the most energetically feasible paths. A graph search that looks for a path with the lowest total weight would pull out the most energetically feasible, or “best”, path.
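A minimal Python sketch of this weighting scheme together with the lowest-total-weight search, assuming the roadmap is given as an adjacency map; node names are illustrative and the goal is assumed reachable:

    import heapq

    def edge_weight(node_energies):
        # Average, rather than sum, so long low-energy edges are not
        # unfairly penalized, as discussed above.
        return sum(node_energies) / len(node_energies)

    def best_path(graph, start, goal):
        """Dijkstra search for the most energetically feasible path.
        `graph` maps a node to a list of (neighbor, weight) pairs."""
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        # Walk predecessors back from the goal to recover the path.
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]

    graph = {"start": [("a", 1.5)], "a": [("goal", 0.7)], "goal": []}
    print(best_path(graph, "start", "goal"))  # ['start', 'a', 'goal']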
3.4 Implementation
So far, we have implemented several pieces of the algorithm in C++. We began with a framework developed by Ming Zhang, a postdoc in Dr. Kavraki's research group, which supplies a working representation of molecules and proteins. We use the Atomgroup Local Frames approach, developed by Zhang, to quickly calculate the new xyz positions of the atoms. An atom group is simply a group of atoms that remain fixed relative to each other; such groups may be rings, protein side chains, or atoms connected by non-rotatable bonds.
The code can input and output files in the mol2 format [1]. We selected this format for a number of reasons. First, it is very intuitive to use and code. Second, RasMol, a free molecule-visualization tool, works with mol2 files. Finally, and most importantly, we can utilize the thousands of protein structures stored in the Protein Data Bank (PDB). These structures are stored as pdb files, but since those are not intuitive, we use Sybyl [2] to convert them to the mol2 file format.
[1] The mol2 file format, developed by Tripos, is used by many biochemists.
[2] Sybyl is an extensive tool for visualization and biological computation developed by Tripos.
With the help of Paul Murphrey, another member of Dr. Kavraki's group, we have added basic energy calculations to the implementation. These calculations compute the van der Waals energy of the molecule using a distance cutoff of 8 Å, which is standard in the biochemistry community. To save computation time, the energy from pairs of atoms in the same atom group is computed only in the first energy calculation: since these atoms do not move relative to each other, their van der Waals energy is constant. Depending on how the atom groups are defined, this can greatly reduce computation time.
We have also implemented the GENERATE NEIGHBOR() and MINIMIZE ENERGY() methods. These were fairly straightforward to implement but need some optimization work.
3.Future Research
The next step is to put all the pieces together to develop the entire algorithm. The robotics group at Texas A&M University has a good implementation of a PRM framework. To have a working algorithm, all that is left is to integrate the pieces developed this summer into their PRM framework.
As mentioned earlier, some optimization work is needed to reduce the running time. An obvious approach is to compute energy calculations in parallel. Also, every node along an edge must be checked for validity to add that edge to the roadmap. Checking a node is independent of the other nodes, so this can be done in parallel, giving a node(or set of node) to each processor. Since most of the running time is spent in checking validity and computing energies, these improvements will produce a significant reduction in running time. We need to look into other ways to parallelize the code.
Once the entire algorithm is implemented and working, we can look at many different protein-protein interactions. We would like to first consider the calmodulin/myosin interaction for two reasons. First, calmodulin undergoes large conformational changes, the exact situation our research is targeting. Second, it is already known how calmodulin moves to bind to myosin. We can use this information to test the validity of our results.
This paper provides a good foundation for future research. This research will provide insight into how proteins move and function. It will mainly be used to study how proteins interact with each other and other biological substances in the body. This knowledge has the power to impact the bioinformatics community as a whole, especially pharmaceutical drug design and molecular modeling.
Bioinformatics blends Computer Science, Biology, and Chemistry together. It uses computers and techniques developed in Computer Science to solve many problems in Biology and Chemistry. Applications include molecule and protein modeling, protein sequence alignment, protein folding, rational drug design, and database searching to cull information from large genomes and protein databanks. Proteins interact with each other in most reactions that occur in the body. They are involved in everything from DNA transcription and replication to viral protection to energy consumption and distribution among the cells. Understanding how a protein interacts with other proteins and how it functions is crucial in understanding how the body works. There are three main types of protein interactions:
Protein-protein interactions
Protein-DNA interactions
Interactions between monomers of multimeric proteins
Our goal is to study these reactions and to simulate them. Motion planning techniques are good at computing paths when the robot has many degrees of freedom. Several algorithms, especially Probabilistic Roadmap Methods (PRMs) and their variations, have had a lot of success in situations where the robot is complex.
Modeling Protein-Protein Interactions with the Aid of Motion Planning Algorithms
1. Introduction
Bioinformatics has been receiving a lot of attention lately from the research community. Bioinformatics is increasing in popularity because it has applications in all facets of life, and it is a relatively new field with very fertile ground.
Protein-protein interactions occur between two or more proteins. Some examples are the interaction involving GroEL and GroES to aid in protein folding, the interaction between calmodulin and myosin to produce muscle contraction, and protein/antibody binding. Protein-DNA interactions involve a protein and a piece of DNA. This situation occurs mostly in DNA replication. Finally, some proteins are made up of several chains or loops. These chains are intertwined and interact with each other to dictate how the protein folds, its function, and how it interacts with other things. For example, HIV-1 protease is made up of two chains (or monomers) that move with each other.
The difficulty is that proteins have hundreds to thousands of degrees of freedom. Even when assumptions are made and their structures are simplified, they are still very complex and difficult to simulate in a reasonable amount of time. Because of their complexity, most simulations consider proteins as rigid objects. This is an unreasonable assumption because some proteins are known to undergo large conformational changes. (They exhibit large movements during interactions.) They should be considered flexible, not rigid.
By considering the protein to be an articulated robot (a robot with several links), we can apply the same techniques developed for robot motion planning to protein simulation.
2. Evidence that Proteins are Flexible
Proteins do undergo large conformational changes; the rigid-body assumption is grossly inadequate in some cases.
2.1 GroEL/GroES Complex
Proteins may make "bad" connections when trying to fold to their native state. These "bad" connections, or aggregates, can cause the protein to function improperly. Chaperones can prevent and reverse such "bad" connections by binding to and releasing the unfolded or aggregated protein during the folding process. Chaperones do not increase the rate of protein folding; they only increase its efficiency.
GroEL and GroES work together (interact) to help proteins fold into their native state properly. They do this by surrounding the protein like a cage, thereby providing a safe environment for the protein. GroES binds to the top of GroEL and forms a cage around the protein. During this interaction, GroEL undergoes large conformational changes. Pictures of these proteins (obtained from the Protein Data Bank and viewed through RasMol) are shown in Figure 2.
Figure 1: GroEL/GroES are two chaperones that work together to increase the efficiency of protein folding. The top "cap" is GroES and the bottom two rings are GroEL.
Figure 2: GroEL is shown before (a) and after (b) binding to GroES. GroES is removed for clarity. GroEL undergoes large conformational changes during the binding process. It stretches upwards and twists in the presence of GroES.
2.2 DNA Polymerases
DNA is made up of two helices. They are designed in such a way that, given one helix, the other helix can be easily determined. During DNA replication, these two helices are split apart, and for each helix the cell creates the other half. In effect, the cell takes one piece of DNA and makes a copy of it. DNA polymerases catalyze this process.
DNA polymerase I (Pol I) was the first enzyme discovered to help synthesize DNA. Pol I has three main functions: it acts as a polymerase, as an exonuclease in the 3'->5' direction, and as an exonuclease in the 5'->3' direction. As a polymerase, it helps create the second helix by binding the correct bases. As an exonuclease, it can correct its mistakes; the exonuclease activity is like proofreading.
This enzyme is shaped like a hand (called the Klenow fragment); see Figure 3. When it functions as a polymerase, it binds to the DNA just like you would grab a rod with your hand. When it functions as a 3'->5' exonuclease, the protein undergoes a large conformational change and forms another cleft perpendicular to the cleft that contains the polymerase site. There is yet another binding site for the third function, the 5'->3' exonuclease.
The DNA also changes conformation during interaction with Pol I. As Pol I "grabs" the DNA, it bends it about 80 degrees. This is large enough that it is no longer realistic to consider it a rigid object. The DNA, as well as the protein, must be thought of as flexible.
Figure 3: DNA Polymerase I is shaped like a hand – shown in blue. It “grabs” the piece of DNA during DNA replication when it functions as a polymerase. The protein is shown both space-filled (a) and as a ribbon (b).
2.3 Calmodulin
Calmodulin regulates many important functions in the body by reacting to changing calcium (Ca2+) levels. For example, it interacts with myosin to perform muscle contractions in the body. Its sensitivity to calcium levels is due to its readiness to bind to calcium. Calmodulin has two globular domains connected by a single alpha helix (see Figure 4(b)). Each globular domain contains two Ca2+ binding sites. Calmodulin undergoes large conformational changes when bound to Ca2+. Also, when bound to myosin, the globular domains remain relatively unchanged, but the alpha helix connecting them unwinds and contains a sharp bend. The drastic change in conformation is mainly due to the change in the alpha helix.
3. The Project
The goal of this research is to simulate interactions between proteins. As discussed above, proteins are complex, dynamic structures. Some motion planning algorithms have had much success in computing paths for very complex robots. We want to apply these techniques from robotics to protein-protein interaction simulation.
3.1 Background
One class of motion planning algorithms, Probabilistic Roadmap Methods (PRMs), has been very successful at computing paths in a reasonable amount of time even when the robot has many degrees of freedom.
Figure 4: Calmodulin is shown in its unbound (a) and bound (b) states. This large conformational change is due to the central alpha helix as it winds and unwinds.
Although PRMs are not complete (i.e., they are not guaranteed to find a path if one exists), they are able to find solutions to many problems quickly. Complete algorithms do exist, but they are prohibitively slow and computationally expensive. PRMs sacrifice completeness for speed.
PRMs build a roadmap through the robot’s configuration space (C-space) that the robot can use to navigate its environment. A configuration is a unique position and orientation of the robot. The C-space consists of all configurations, valid or not. In robotics applications, a valid configuration is considered to be one that is entirely collision-free. The roadmap is much like a state highway map. It consists of nodes (cities) and edges (streets).
Roadmap construction consists of two phases: node generation and node connection. During node generation, nodes are created that form the basis of the roadmap. These nodes can be generated in a number of ways. The traditional PRM generates them uniformly at random; such nodes are easy to compute and provide good coverage of the C-space. Variations of the traditional PRM use other methods to generate nodes. During node connection, PRM tries to connect each node with its k closest neighbors via some local planner. Variations of PRM use different methods to connect the nodes and different local planners.
Once the roadmap is built, a path between any start configuration and goal configuration is easily found. First the start and the goal are connected to the roadmap. Then the roadmap is searched for the shortest path between the two nodes using a graph search algorithm.
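For concreteness, here is a minimal C++ sketch of that query step, using Dijkstra's algorithm over an adjacency-list roadmap. The types and names are our own illustrative assumptions, not the implementation described later in this paper.

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; double weight; };
using Roadmap = std::vector<std::vector<Edge>>;  // adjacency list

// Returns the lowest total weight from start to goal, or infinity
// if the two nodes lie in disconnected pieces of the roadmap.
double shortestPath(const Roadmap& g, int start, int goal) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(g.size(), INF);
    using Item = std::pair<double, int>;             // (distance, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (u == goal) return d;                     // shortest path found
        if (d > dist[u]) continue;                   // stale queue entry
        for (const Edge& e : g[u])
            if (d + e.weight < dist[e.to]) {
                dist[e.to] = d + e.weight;
                pq.push({dist[e.to], e.to});
            }
    }
    return INF;
}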
One particular variation of PRMs is focused on single-query planning. Instead of building a roadmap that can solve multiple queries, or start and goal pairs, these planners tailor the roadmap to one particular query. Two similar methods were developed independently at Iowa State University and Stanford University: Steve LaValle's Rapidly-exploring Random Trees (RRT) and David Hsu's planner for expansive configuration spaces. Both methods grow a roadmap from the start towards the goal and from the goal towards the start until the two meet.
RRT alternates node generation and node connection as it expands, or grows, the roadmap. First a node is generated at random; this node specifies the direction of expansion. Then the algorithm selects the tree node closest to the random node and makes a small step from it towards the random node. If the step is collision-free, the new node is added to the roadmap and connected to the node it began from. After each new node and edge is added, the algorithm checks whether the goal has been reached or, in the case of two trees, whether the trees meet. This process is repeated until a solution is found. Since growth is biased by random nodes, the expansion tends to be global, pulling the roadmap out into unexplored regions of the C-space.
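Below is a minimal C++ sketch of one such extension step. The Tree type, the validity callback, and the brute-force nearest-neighbor search are our own simplifications for illustration, not LaValle's implementation.

#include <cmath>
#include <cstddef>
#include <vector>

using Config = std::vector<double>;   // e.g. one value per degree of freedom

struct Tree {
    std::vector<Config> nodes;
    std::vector<int>    parent;       // parent[i] = index of node i's parent
};

double dist(const Config& a, const Config& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// One RRT extension: step from the nearest tree node towards qrand.
// `valid` stands in for the collision (or collision + energy) check.
bool extend(Tree& t, const Config& qrand, double eps,
            bool (*valid)(const Config&)) {
    std::size_t nearest = 0;
    for (std::size_t i = 1; i < t.nodes.size(); ++i)
        if (dist(t.nodes[i], qrand) < dist(t.nodes[nearest], qrand))
            nearest = i;
    Config qnew = t.nodes[nearest];
    double d = dist(qnew, qrand);
    if (d < 1e-12) return false;                   // qrand already reached
    for (std::size_t i = 0; i < qnew.size(); ++i)  // small step towards qrand
        qnew[i] += eps * (qrand[i] - qnew[i]) / d;
    if (!valid(qnew)) return false;                // reject invalid steps
    t.nodes.push_back(qnew);
    t.parent.push_back(static_cast<int>(nearest)); // edge back to parent
    return true;
}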
Hsu's algorithm differs in how expansion is biased. Instead of picking a random node and walking towards it, his algorithm picks a node x already in the tree, based on some probability. Then several new nodes are generated in the neighborhood of x. Some of these nodes are kept, and a node is only added to the roadmap if an edge exists between it and x. Again, after each new node and edge is added, the algorithm checks whether the goal has been reached or the two trees meet. This algorithm implements local expansion through neighborhoods, while RRT uses global expansion through random sampling.
3.2 Biology Considerations
Motion planning algorithms were designed for robotics applications. With just a description of the robot, the environment, and a collision-checker, difficult motion planning problems can be solved with ease. These same techniques, although originally intended for robots, can be applied to proteins.
Proteins are made up of atoms and bonds. Each atom can be modeled as a sphere, and each bond can be modeled as a rod that connects two atoms together. The protein can be considered to be an articulated robot, or one with multiple links. Here, the bonds are the robot’s links and the atoms are the robot’s joints.
Linked robots move based on changes in their joint angles (see Figure 5(a)). A joint angle is the angle between two consecutive links. The number of joint angles plus the position and orientation of the base are the robot's degrees of freedom.
Proteins behave slightly differently. Chemists have discovered that bond lengths and bond angles (the angle between two consecutive bonds) do not change significantly during conformational changes. We can safely assume that the bond lengths and bond angles are fixed. Torsional angles, on the other hand, do change significantly when the protein's conformation changes; they are the main contributing factor to changes in conformation (see Figure 5(b)). The number of torsional angles plus the position and orientation of the root atom are the protein's degrees of freedom.
Figure 5: (a) Linked manipulators move based on their joint angles. (b) Likewise, proteins move based on their torsional angles.
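Under these assumptions, a conformation can be represented very compactly. The struct below is one hypothetical representation, not the data structure used in the implementation described later.

#include <array>
#include <vector>

struct Conformation {
    std::array<double, 3> rootPosition;     // x, y, z of the root atom
    std::array<double, 4> rootOrientation;  // unit quaternion (w, x, y, z)
    std::vector<double>   torsions;         // one angle per rotatable bond
};
// Degrees of freedom = 6 (root pose) + torsions.size().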
The goal of motion planning algorithms is to produce feasible paths for the robot. These paths must be realistic. In order for a path to be feasible, at every point along the path the robot must be collision-free. (A collision-free robot is one that is not colliding with itself or any other obstacle in the environment.)
The same principle holds true for proteins. Any computed path must be feasible so the simulation is realistic. The notion of a valid/feasible configuration is more complicated for a protein than for a physical robot. Not only must the protein be collision-free, it must also be energetically reasonable. In nature, protein conformations typically have low energies. The same must be true for computed conformations.
This property can be easily included in the collision-checker. Instead of merely checking for collisions, the collision-checker will now also check the energy. By modeling proteins as articulated robots and including the energy function in the collision-checker, the same motion planning algorithms developed for robots can be directly applied to proteins.
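A hedged sketch of that extended validity test, assuming a Conformation type as above and placeholder collisionFree() and computeEnergy() helpers:

struct Conformation;                           // as sketched earlier
bool   collisionFree(const Conformation& q);   // assumed helper
double computeEnergy(const Conformation& q);   // assumed helper

bool isValid(const Conformation& q, double energyThreshold) {
    if (!collisionFree(q)) return false;       // classic robotics test
    return computeEnergy(q) < energyThreshold; // added energetic test
}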
Unfortunately, exact energy calculations are very time consuming. They would be inappropriate to include in a collision-checker that is called thousands of times during roadmap creation. Some assumptions can be made to reduce the running time of the energy calculations. First, we will only consider the van der Waals energy. Electrostatic energy and torsional energy make up a small part of the total energy, so they can be safely neglected.
To compute the van der Waals energy, every pair of atoms must be considered. Atoms that are close together contribute heavily to the total energy; atoms that are far apart contribute very little. We can therefore approximate the van der Waals energy by neglecting pairs of atoms that are far apart: we impose a cutoff distance (8 Å in our case), and if two atoms are farther apart than the cutoff, the energy for that pair is not computed. This reduces the running time of the energy calculation and still gives a good approximation of the energy.
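As an illustration, the C++ sketch below sums a cutoff 12-6 Lennard-Jones potential, a standard functional form for the van der Waals energy. The single epsilon/sigma parameter pair and the Atom type are simplifying assumptions; real force fields use per-atom-type parameters.

#include <cmath>
#include <cstddef>
#include <vector>

struct Atom { double x, y, z; };   // coordinates in Angstroms

double vdwEnergy(const std::vector<Atom>& atoms,
                 double epsilon, double sigma, double cutoff = 8.0) {
    double total = 0.0;
    const double cutoff2 = cutoff * cutoff;
    for (std::size_t i = 0; i < atoms.size(); ++i)
        for (std::size_t j = i + 1; j < atoms.size(); ++j) {
            double dx = atoms[i].x - atoms[j].x;
            double dy = atoms[i].y - atoms[j].y;
            double dz = atoms[i].z - atoms[j].z;
            double r2 = dx * dx + dy * dy + dz * dz;
            if (r2 > cutoff2) continue;        // neglect distant pairs
            double sr6 = std::pow(sigma * sigma / r2, 3.0);  // (sigma/r)^6
            total += 4.0 * epsilon * (sr6 * sr6 - sr6);      // 12-6 term
        }
    return total;
}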
In summary, motion planning algorithms like PRM and RRT can be applied to protein-protein interactions by modeling the protein as an articulated robot and including the energy function in the collision-checker. If the energy calculation is accurate and efficient, then these motion planning algorithms will perform well on proteins and could provide greater insight into how proteins move and function.
3.3 Basic Algorithm
Our algorithm for modeling protein-protein interactions is an extension of RRT and Hsu's planner for expansive C-spaces, with one large variation: instead of building trees, we build graphs. This way we can look at many different paths from the start (unbound state) to the goal (bound state) instead of just one solution. This will improve the quality of the "best" solution and increase our understanding of protein-protein interactions.
The first concern is to generate energetically feasible nodes to build the roadmap with. Because the conformational space is n-dimensional, it is nearly impossible to perform a systematic search. If we just generate the nodes randomly, the likelihood that we will find “good” nodes is very small. To combat these difficulties, we first generate a node randomly and then perform a gradient descent to minimize its energy.
Performing an exact gradient descent is too time consuming to consider. Instead, we approximate a gradient descent. To do this we first generate a random node near the original node and compute its energy. If its energy is less than the original node's, we declare it to be the new minimum and replace the original node. If its energy is greater than the original node's, we throw it away. This process is repeated many times, typically anywhere from 10 to 30 iterations.
To generate a random nearby node, we first select a few rotatable bonds at random. We then apply a small, random displacement to the torsional angles of these bonds. The resulting conformation is randomly generated, but very similar to the original conformation.
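The C++ sketch below combines the neighbor generation and the approximate descent just described. The parameter values (three perturbed bonds, 20 iterations) and the computeEnergy() helper are illustrative assumptions.

#include <random>
#include <utility>
#include <vector>

double computeEnergy(const std::vector<double>& torsions);  // assumed helper

std::vector<double> generateNeighbor(std::vector<double> q, std::mt19937& rng,
                                     int bondsToPerturb = 3,
                                     double maxDelta = 0.05) {
    std::uniform_int_distribution<std::size_t> pick(0, q.size() - 1);
    std::uniform_real_distribution<double> delta(-maxDelta, maxDelta);
    for (int i = 0; i < bondsToPerturb; ++i)
        q[pick(rng)] += delta(rng);     // small random torsional change
    return q;
}

// 10-30 iterations of "generate a neighbor, keep it if energy drops".
std::vector<double> minimizeEnergy(std::vector<double> q, std::mt19937& rng,
                                   int iters = 20) {
    double best = computeEnergy(q);
    for (int i = 0; i < iters; ++i) {
        std::vector<double> cand = generateNeighbor(q, rng);
        double e = computeEnergy(cand);
        if (e < best) {                 // accept only improvements
            best = e;
            q = std::move(cand);
        }
    }
    return q;
}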
Now we have all the building blocks to implement a variation of RRT and Hsu’s algorithm. Roadmap construction is as follows:
BUILD_ROADMAPS(qstart, qgoal)
1. Rstart.init(qstart)
2. Rgoal.init(qgoal)
3. for n = 1 to N do
4.   EXTEND(Rstart)
5.   EXTEND(Rgoal)
6.   CONNECT_MAPS(Rstart, Rgoal)

EXTEND(R)
1. qorig <- SELECT_NODE(R)
2. qnew <- GENERATE_NEIGHBOR(qorig)
3. MINIMIZE_ENERGY(qnew)
4. R.add_node(qnew)
5. R.add_edges(qnew)
We build two roadmaps, one rooted at the start conformation and one rooted at the goal conformation. During each iteration, each roadmap is extended. Then the algorithm attempts to connect the two roadmaps together. We are looking for multiple paths, so we do not stop the algorithm once CONNECT_MAPS() is successful. If we only wanted to save time and compute one path, we would stop the algorithm as soon as CONNECT_MAPS() succeeds and a path is found.
The EXTEND() method simply generates a new nearby node, minimizes its energy, and then adds it to the roadmap. Then add_edges() checks for connections between the new node and its k closest neighbors. This allows us to build a graph, instead of a tree, and search for multiple solution paths.
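A sketch of that connection step, using a brute-force k-nearest-neighbor search (a real implementation would likely use a spatial index); edgeValid() stands in for the local planner, and Config and dist() are as in the earlier RRT sketch.

#include <algorithm>
#include <vector>

using Config = std::vector<double>;
double dist(const Config& a, const Config& b);    // as in the RRT sketch
bool edgeValid(const Config& a, const Config& b); // local planner, assumed

struct Graph {
    std::vector<Config> nodes;
    std::vector<std::vector<int>> adj;            // adjacency lists
};

// Connect node n to its k nearest neighbors whenever the local planner
// validates the edge; these extra edges turn the tree into a graph.
void addEdges(Graph& g, int n, int k) {
    std::vector<int> idx;
    for (int i = 0; i < static_cast<int>(g.nodes.size()); ++i)
        if (i != n) idx.push_back(i);
    std::sort(idx.begin(), idx.end(), [&](int a, int b) {
        return dist(g.nodes[a], g.nodes[n]) < dist(g.nodes[b], g.nodes[n]);
    });
    int limit = std::min(k, static_cast<int>(idx.size()));
    for (int i = 0; i < limit; ++i)
        if (edgeValid(g.nodes[n], g.nodes[idx[i]])) {
            g.adj[n].push_back(idx[i]);           // undirected edge
            g.adj[idx[i]].push_back(n);
        }
}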
When edges are checked for validity, every node's energy along that edge is computed. We can use this information to compute an edge weight. The simplest scheme is to let the edge weight equal the sum of all node energies along that edge. This gives a higher weight to paths with higher energies. It also gives longer edges a higher weight, which may unduly bias the algorithm to look for shorter paths, ignoring longer paths that may be energetically feasible. To avoid this, the average energy along the edge could be stored as the edge weight instead.
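In symbols, if the local planner's path for an edge e passes through intermediate nodes q1, ..., qm with energies E(qi), the two weighting schemes just described are:

w_sum(e) = E(q1) + E(q2) + ... + E(qm)
w_avg(e) = w_sum(e) / m

The first penalizes long edges regardless of their energies; the second depends only on the energy profile, not on the edge length.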
As long as higher weights correspond to edges with high energies and lower weights correspond to edges with low energies, edge weights can identify the most energetically feasible paths. A graph search that looks for a path with the lowest total weight would pull out the most energetically feasible, or “best”, path.
3.4 Implementation
So far, we have implemented several pieces of the algorithm in C++. We began with a framework developed by Ming Zhang, a postdoc in Dr. Kavraki's research group. This framework supplied a working representation of molecules and proteins. We are using the Atomgroup Local Frames approach, developed by Zhang, to quickly calculate the new xyz positions of the atoms. An atomgroup is simply a group of atoms that remain fixed relative to each other. Such groups may be rings, protein sidechains, or atoms connected by non-rotatable bonds.
The code can input and output files in the mol2 format [1]. We selected this format for a number of reasons. First, it is very intuitive to use and code. Second, RasMol, a free molecule visualization tool, works with mol2 files. Finally, and most importantly, we can utilize the thousands of protein structures stored in the Protein Data Bank (PDB). These structures are stored as pdb files, but since they are not intuitive, we use Sybyl [2] to convert them to the mol2 file format.
[1] The mol2 file format, developed by Tripos, is used by many biochemists.
[2] Sybyl is an extensive tool for visualization and biological computation developed by Tripos.
With the help of Paul Murphrey, another member of Dr. Kavraki's group, we have added basic energy calculations to the implementation. These calculations compute the van der Waals energy of the molecule, using a distance cutoff of 8 Å; this distance is standard in the biochemistry community. To save computation time, the energy derived from pairs of atoms in the same atom group is only computed during the first energy calculation. Since these atoms do not move relative to each other, their van der Waals energy is constant. Depending on how the atom groups are defined, this can greatly reduce computation time.
We have also implemented the GENERATE_NEIGHBOR() and MINIMIZE_ENERGY() methods. These were fairly straightforward to implement but need some optimization work.
4. Future Research
The next step is to put all the pieces together to develop the entire algorithm. The robotics group at Texas A&M University has a good implementation of a PRM framework. To have a working algorithm, all that is left is to integrate the pieces developed this summer into their PRM framework.
As mentioned earlier, some optimization work is needed to reduce the running time. An obvious approach is to perform the energy calculations in parallel. Also, every node along an edge must be checked for validity before that edge is added to the roadmap. Checking a node is independent of the other nodes, so this can be done in parallel, giving a node (or set of nodes) to each processor. Since most of the running time is spent checking validity and computing energies, these improvements should produce a significant reduction in running time. We also need to look into other ways to parallelize the code.
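As one concrete (hypothetical) realization of this idea, the per-node checks along a candidate edge could be parallelized with an OpenMP reduction; isValid() is the combined collision and energy test, and the sketch assumes compilation with OpenMP enabled (e.g. -fopenmp).

#include <vector>

using Config = std::vector<double>;
bool isValid(const Config& q);        // combined collision + energy test

// The nodes along an edge are independent, so check them in parallel.
bool edgeValidParallel(const std::vector<Config>& nodesAlongEdge) {
    bool ok = true;
    #pragma omp parallel for reduction(&&: ok)
    for (long i = 0; i < static_cast<long>(nodesAlongEdge.size()); ++i)
        ok = ok && isValid(nodesAlongEdge[i]);
    return ok;
}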
Once the entire algorithm is implemented and working, we can look at many different protein-protein interactions. We would like to first consider the calmodulin/myosin interaction for two reasons. First, calmodulin undergoes large conformational changes, the exact situation our research is targeting. Second, it is already known how calmodulin moves to bind to myosin. We can use this information to test the validity of our results.
This paper provides a good foundation for future research. This research will provide insight into how proteins move and function. It will mainly be used to study how proteins interact with each other and other biological substances in the body. This knowledge has the power to impact the bioinformatics community as a whole, especially pharmaceutical drug design and molecular modeling.
NANO TECHNOLOGY
(NANO TECHNOLOGY OF CHIP DESIGNING THROUGH BIOLOGY)
ABSTRACT
Nano technology is slowly creeping into electronic chips, shrinking them to infinitesimal units. So far, it was thought that only carbon, in the form of carbon nano tubes, could be used to achieve this technology. The design and fabrication of carbon nano tube based chips require ultra-high-precision equipment working in dangerous and expensive industrial environments, where the slightest flaw results in an adverse outcome. For such a vital situation, biotechnology stands with promising solutions. It offers simple biological molecules and microbes which can themselves assemble electronic materials and build nano-scale circuits; they can even self-correct while growing the materials. The biotechnological approach first involves selection of peptides that recognize a particular semiconductor material, such as gallium arsenide or indium phosphide, and even one crystallographic face versus another, with very high specificity. These peptides are used to engineer a special virus called a bacteriophage. The engineered viruses are reproduced by a cyclic process called directed evolution. By engineering the virus's structure, it acts as a template for making nano transistors, nano wires and quantum dots. The nano transistors formed are of the FET type. Quantum dots are basic functional blocks in circuits; by regular arrangement of these quantum dots, nano wires are formed, and using these tools nano circuits are made. The approach also uses a property of the virus called self-assembly: when viruses are separated from water, they form multilayer films, lining up shoulder to shoulder as a liquid crystal. This biological approach revolutionizes the field of chip design and fabrication, as chips can be made at ordinary room temperatures in aqueous, nontoxic conditions, unlike other nano wire fabrication methods. This revolution could result in high-density quantum flash memories, foldable display screens using viral films, nano robots, microcomputers in the body for diagnosis and treatment, and a quantum era bringing super-fast quantum computers and quantum communication systems at very low cost.
BIOTECHNOLOGY’S ASSURANCE FOR NANOTECHNOLOGY
The explorations and necessities in science and technology are gradually wiping out the barriers and amalgamating different sectors of science that appear to be uncorrelated. One such groundbreaking and innovative exploration is chip designing using biotechnology. If this becomes a success, there will be a major breakthrough in the field of nano technology.
As the speed and accuracy of circuits increase, their complexity increases while the size of chips shrinks. This increases the need for ultra-high-precision, sophisticated equipment, which is very expensive and laborious to operate. Even when such equipment is used, the efficiency of the systems is uncertain when compared with the investment in it. Hence there is a need to look for an alternative approach that promotes nano technology. Biotechnology has some promising solutions for this.
INTRODUCTION
Living creatures produce the most complex molecular structures known to science. Crafted over ages by natural selection, these 3-dimensional arrangements of atoms manifest a precision and fidelity, not to mention minuteness, far beyond the capabilities of current technologies. Under the direction of genes encoded in DNA, cells construct proteins that put together the fine structures necessary for life. By altering the genetic codes of these cells, they can be used for various applications. This briefly illustrates biology's promise in furthering nano technology, the manufacture of circuits and devices only billionths of a meter in size.
Genetic engineers are evolving nano technology tools by picking the best molecules among the variants found in large populations over several generations. It was discovered that a special type of virus could be used for achieving this. By applying two main processes to the virus, it can be used to develop tools and circuits at the nano scale. The processes are: 1) directed evolution, and 2) self-assembly.
ROLE OF PEPTIDES IN THIS APPROACH
In this approach, peptides play an important role in the fabrication of chips from viruses. Peptides are small proteins made of short chains of amino acids. Among the billions of different peptides, some possess a special affinity towards certain materials of interest. Peptides that have a high affinity for a particular semiconductor material are selected. Initially, peptides having affinity towards materials like zinc sulphide and gold are considered and used to genetically engineer the virus.
DESCRIPTION AND ENGINEERING OF VIRUS
The virus used here is a special type of virus called a bacteriophage. It gets its name from its property of infecting bacteria, which is used here to amplify its population. It resembles a very fat, rocket-shaped pencil, with a major protein coat of one type of peptide along its "painted shaft" and another type as a minor protein coat on the five tentacle-like structures at the "eraser" end, each controlled by its own genes. It has a length of 880 nanometers and a width of 6.6 nanometers. Inside the major protein coat there is a single strand of DNA, which controls the nature of the virus. The DNA consists of three genes, and gene-3 encodes the peptide that is present on the minor protein coat. So we have a billion different viruses that are all genetically similar, except that they differ from each other by a small peptide at each end. The reason for selecting the bacteriophage is not only its physical structure but also its ability to multiply rapidly and be easily modified in the lab. When a solution contains all of the one billion possible viruses, each with a random peptide insert at its ends, the solution is termed a phage library. Usually water is used as the solution to act as a medium for the viruses.
The peptide coats can be independently genetically engineered with different peptides of interest, which ultimately changes the physical and adhering properties of the virus towards certain materials; this is a vital property used in the chip-designing process.
DIRECTED EVOLUTION
Directed evolution describes techniques for the iterative production, evaluation and selection of variants of a biological sequence. It mimics the process of natural selection.
Directed evolution basically involves testing the phage library, selecting the specific engineered viruses, and amplifying the population of engineered viruses while retaining the desired original properties.
TARGETING PURE SEMICONDUCTOR CRYSTALS
First, the phage library of peptides with affinity towards the substance of interest is selected. The phage library is then exposed to the substance of interest, such as a wafer of zinc sulphide. Viruses whose minor protein coats have a natural affinity for the material stick to the wafer by their eraser ends. The chip is then washed with dilute acid or a chemical bath. Phages that do not bind well to the wafer are washed away, and the remaining phages sticking to the wafer are removed by a process called pH elution. The removed phage population is amplified by infecting bacteria. The produced viruses are then poured back onto the chip and subjected to a more stringent (i.e., more acidic or basic) wash. After several cycles of successively stronger washes, the viruses still sticking to the wafer are those whose peptide coats fit the semiconductor crystal tightly. Similarly, several rounds of directed evolution are performed to pick out viruses that bind to crystals of gallium arsenide and indium phosphide, the two semiconductors employed in high-frequency communications chips. The resulting viruses act as versatile templates in the fabrication of chips.
FOUNDATION FOR NANO CIRCUITS, NANO WIRES & NANO RINGS
The viruses can not only stick to semiconductor materials but can also grow nano-scale semiconductor crystals over themselves. Consider first viruses engineered at some locations on the major protein coat with a peptide having affinity towards the semiconductor of interest. When these are mixed with precursor chemicals containing the semiconductor's elemental ingredients, the virus's engineered peptides act as a template, hustling atoms into the same crystal structure the peptides were engineered to bind. The result is an organic-inorganic hybrid: viral particles 7 nm wide and 800 nm long, with 2 to 3 nm semiconductor crystals wherever an engineered peptide is found. If the entire protein coat is engineered with the peptide, semiconductor crystals form all over the shaft of the virus. These crystals are called quantum dots, and they are the fundamental components of nano-scale circuits. The quantum dot core is usually enclosed in zinc sulphide, which has a higher electronic band gap; this improves the confinement of the electron-hole pairs and helps conduction. After quantum dots have formed all over the shaft, the virus is subjected to a process called annealing, which removes the virus's organic framework. High-temperature annealing removes the organic material of the virus, leaving the inorganic quantum dots to collapse into the space formerly occupied by the virus and fuse into a single crystal, resulting in a solid nano wire. Varying the length of the virus before the start of the process can produce nano wires of variable lengths.
By suitable peptide engineering, viruses can be selected to form quantum dots of the semiconductors zinc sulphide and cadmium sulphide and of the magnetic materials cobalt-platinum and iron-platinum, from which the corresponding nano wires are made.
For making a nano ring, two different genetic modifications are initially made to the bacteriophage, one at the eraser end and one at the pencil-tip end. The two ends are then linked together with another molecule to form a ring. A procedure similar to that for a nano wire is then followed, using cobalt precursor chemicals, finally resulting in tiny magnetic rings of cobalt particles smaller than 100 nanometers. They find application in storing magnetic information at high speed at room temperature.
CONSTRUCTION OF NANO TRANSISTOR
Transistor fabrication is another important part of the chip-designing process. The transistors used in conventional chip designing are of the FET type, which are highly efficient. The present biological approach also offers a way to design FET transistors at the nano scale.
For the construction of the transistor, the virus is first engineered at the tip of the major protein coat and at the minor protein coat ends with a peptide that has affinity towards gold. The middle portion of the shaft is engineered with a peptide having affinity towards the desired semiconductor, from which the channel is to be formed. A silicon wafer is patterned in the desired form so that it has exposed gold source and drain electrodes, with a gate electrode between them buried under a layer of insulation. The virus suspension is then poured onto the wafer. Owing to the peptide's affinity, the gold-seeking ends of the virus find and bind to the electrodes, forming a viral bridge between source and drain, across the gate. Semiconductor precursor chemicals, such as those for zinc sulphide, which should form the channel, are then added. Semiconductor crystals nucleate over the shaft wherever an engineered peptide is found. As the shaft is completely engineered, this results in the formation of a 10 nm diameter semiconductor coat on the virus, turning the virus's coat into a nano wire. To make a working transistor, the virus is annealed by a blast of heat, leaving only the nano wire bonded to the gold electrodes and yielding a nano transistor.
FORMATION OF LIQUID CRYSTAL FILMS
The virus possesses one peculiar property that can be used in making viral films. This property is called self-assembly: when viruses are separated from water, they form multilayer films, lining up shoulder to shoulder as a liquid crystal. These liquid crystal films are optically transparent and can be modified. Film properties such as transparency depend mainly on the precursor chemicals. Films containing zinc sulphide are crystal clear and are used in many applications related to display and memory technology.
APPLICATIONS
The applications of this approach are fascinating and give a cutting edge to future technologies. The applications are:
High speed quantum flash memories
Flash memories can be fabricated by this approach in two ways. One way is through nano rings of magnetic materials, forming high-density magnetic memory; the other is through viral films with magnetic-material quantum dots on them, giving rise to high-density quantum flash memories.
The nano rings formed of cobalt particles could be utilised for data storage. The cobalt nano particles used to form the rings have magnetic north and south poles and join up when brought together. Once they are formed into rings, a collective magnetic state known as flux closure occurs: within the ring there is a strong magnetic force, or flux, but there is zero net magnetic effect on the outside. This results in cheaper, faster, higher-density magnetic data storage.
The viral films formed by self-assembly are decorated with quantum dots of magnetic materials. Later, the viral arrays are turned into nano wires, so that each quantum dot is joined to a nano wire. When each nano wire is systematically patterned and each quantum dot acts as storage for one bit of memory, 30 GB of memory can be accommodated within 1 sq cm.
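As a rough sanity check of that figure (our arithmetic, not a claim from the original work): 30 GB is about 30 x 8 x 10^9 = 2.4 x 10^11 bits, and 1 sq cm is 10^14 sq nm, so each one-bit quantum dot would get roughly 10^14 / (2.4 x 10^11), i.e. about 420 sq nm, corresponding to a dot-to-dot pitch of roughly 20 nm, which is consistent with the few-nanometer dot sizes quoted earlier.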
Quantum computer era
The nano tools and circuits designed under this approach lay a strong foundation for the quantum computer era. Quantum computers are next-generation computers that will harness the power of atoms and molecules to perform memory and processing tasks. Ordinary digital computers work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers are not limited to two states; they encode information as quantum bits, or qubits. A qubit can be a 1 or a 0, or it can exist in a superposition that is simultaneously both 1 and 0, or somewhere in between. Qubits are realized by atoms working together to act as computer memory and a processor. Because a quantum computer can hold these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.
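Formally, a qubit's state can be written as |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex amplitudes satisfying |α|² + |β|² = 1; a measurement yields 0 with probability |α|² and 1 with probability |β|². A register of n qubits can occupy a superposition of all 2^n basis states at once, which is where the potential speedup comes from.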
Quantum computers perform certain calculations billions of times faster than any silicon-based computer. They have the potential to store and process enormous amounts of information, immensely more than can be handled by today's computers. They can factor very large numbers, perform cryptography, and drive science in large-scale modeling projects such as the simulation of materials properties.
Quantum communication systems
A quantum communications detector system can be designed on a single 1 mm chip. It is the receiver for a quantum communications system and serves as a calibration tool for single-photon sources, used to evaluate quantum communication protocols and the security of quantum cryptographic systems. It offers much higher efficiency and lower noise than conventional detectors and eliminates false positives. The detector system not only indicates the arrival of a photon at the destination but also determines how many photons arrive simultaneously and produces an electrical signal proportional to the energy absorbed from the photons. Quantum communications techniques offer greatly improved security of data communications by making undetected eavesdropping physically impossible.
Display technology
Present-day image technologies used in computer screens are based upon pixels, tiny dots of light given out by materials such as LCDs, LEDs, or phosphors, on which our eyes cannot focus accurately. This results in many disorders relating to vision. These can be avoided by using displays made of crystallized nano wires of viral films, whose quality and resolution are much higher than those of any other present-day technology.
By using a fluid containing the designed viral films as the ink, processes such as ink-jet printing or soft lithography can be used to manufacture flexible display devices, such as roll-up computer or TV screens, at very low cost. Using this approach, displays able to serve as the human interface for information technology systems, and new image-generation technology for thumbnail-sized high-quality displays, can be achieved at low cost.
Nano robots and microscopic computers in medicine
Microscopic computers designed by this approach can be programmed to coordinate microscopic nano robots that patrol inside the human body. As disease-causing parasites continue to mutate and produce new diseases that the natural immune system cannot cope with, the nano robots, under the direction of microcomputers installed inside the body, act as nano cell sentinels. Whenever the microcomputer tracks a foreign invader with DNA different from the person's own, it commands the cell sentinels to destroy it. Armed with knowledge of the person's DNA, the nano cell sentinels, under the direction of the microscopic computers, can form immunity not only to the common cold but also to any dangerous mutation that takes place in the body.
CONCLUSION
With all this research going on, the future circuit will self-assemble with enough fidelity, for example, to match the gate voltages on each of millions of transistors in a circuit. Researchers are shifting towards living cells, trying to engineer varieties that can construct and manipulate many types of material, as a cell's genes, unlike those of a virus, can encode complex instructions that can be executed in real time. With a microbe like that, biology could be used not just to build circuits but also to diagnose and repair them.
Nano technology achieved through biotechnology will change the very face of the world without consuming natural resources and without leaving pollutants. This future technology is the gateway through which man can bring into existence innovations that now feel fictitious and hyperbolic. So nano technology through biotechnology is one choice for a safe and healthy world.
REFERENCE
Books:
"Selection of peptides with semiconductor binding specificity for directed nanocrystal assembly" by S.R. Whaley et al.
Web sites:
http://web.ukonline.co.uk/webwise/spinneret/genes/dna.htm
www.bio.davidson.edu/courses/Molbio/MolStudents/spring2003/watson/phage.htm
http://cmbi.bjmu.edu.cn/www-learn/labmeeting/labmeeting_006.pdf
http://pubs.acs.org/cen/topstory/8202/8202notw1.html
http://www.vlsi.wpi.edu/webcourse/ch02/ch02.html
http://www.science.org.au/sats2003/belcher.htm
http://www.eetimes.com/at/news/OEG20040204S0014
http://www.zyvex.com/nano.htm
Magazines:
IEEE SPECTRUM
ELECTRONICS FOR YOU
(NANO TECHNOLOGY OF CHIP DESIGNING THROUGH BIOLOGY)
ABSTRACT
Nano technology is slowly creeping into electronic chips shrinking them to infinitesimal units. So far it was discovered that carbon in the form of carbon Nano tubes only could be used to achieve this technology. The design and fabrication of carbon nano tube based chips requires ultra high precision equipment working in dangerous and expensive industrial environments where one slightest flaw results in an adverse outcome. For such a vital situation biotechnology stands with promising solutions. It solves using simple biological molecules and microbes, which can themselves, assemble electronic materials and build Nano scale circuits. They can even self-correct while growing the materials. Biotechnological approach first involves selection of the Peptides that recognize a particular semiconductor material like gallium arsenide or indium phosphide and even one crystallographic face versus another with a very high specificity. These peptides are used to engineer a special virus called Bacteriophage. These engineered viruses are reproduced by a cyclic process called Directed evolution. By engineering the virus’s structure, it acts as template for making Nano transistors, Nano wires and Quantum dots. Here Nano transistors formed are of FET type. Quantum dots are basic functional blocks in circuits. By regular arrangements of these quantum dots, Nano wires are formed. Using these tools Nano circuits are made. Using a property of virus called Self-assembly, which states that when viruses are separated from water, they form multi layer films lining up shoulder-to-shoulder forming liquid crystal. This biological approach revolutionizes the field of chip design and fabrication as the chips can be designed at ordinary room temparatures in aqueous, nontoxic conditions than that of other nano wire fabrication methods. This revolution result in high-density quantum flash memories, foldable display screens using viral films, nano robots, micro computers in body for diagnosis and treatment and quantum era bringing super fast quantum computers and quantum communication systems at very low cost.
BIOTECHNOLOGY’S ASSURANCE FOR NANOTECHNOLOGY
The explorations and necessities in science and technology are gradually wiping off the barriers and amalgamating different sectors of science, which appear to be un-correlated. One such ground breaking and innovative exploration is chip designing using biotechnology. If this becomes a success there would be major break through in the field of nano technology.
With the speed and accuracy of the circuits increasing, the complexity of the circuits increases with crumbling the size of chips. This increases the need for ultra high precision and sophisticated equipment, which is very expensive and laborious. Even though the equipment is used, the efficiency of the systems is uncertain when compared with the investment over it. Hence there is a need to look for an alternative approach, which promotes Nano technology. Biotechnology has some promising solutions for this.
INTRODUCTION
Living creatures produce the most complex molecular structures known to science. Crafted over ages by natural selection, these 3-dimensional arrangements of atoms manifest a precision and fidelity, not to mention minuteness far beyond the capabilities of current technologies. Under the direction of Genes encoded in DNA, cells construct proteins that put together the fine structures necessary for life. By altering the genetic codes of these cells, they can be used for various applications. This briefly illustrates biology’s promise in furthering Nano technology, the manufacture of circuits and devices only billionths of a meter in size.
Genetic engineers are evolving nano technology tools by picking the best molecules among the variants found in large populations over several generations. It was discovered that a special type of virus could be used for achieving this. By implementing mainly two processes on the virus, they can be used in developing tools and circuits at nano scale. The processes are 1) Directed evolution 2) Self-assembly
ROLE OF PEPTIDES IN THIS APPROACH
In this approach peptides play an important role in the fabrication of chips from viruses. Peptides are small proteins made of short chain of amino acids. There are a billion different peptides from which some peptides possess special property of affinity towards certain materials of interest. Peptides that have a high affinity for some particular semiconductor material are selected. Here initially, peptides having affinity towards materials like zinc sulphide and gold are considered and used to genetically engineer the virus.
DESCRIPTION AND ENGINEERING OF VIRUS
The virus used here is a special type of virus called bacteriophage. It gets its name form its property of afflicting bacteria, which is used here to amplify its population. It resembles a very fat, rocket shaped pencil, with a major protein coat of one type of peptide along its painted shaft and another type as minor protein coat on the five tentacle like structures at the “eraser” end, -each controlled by its own genes. It has a length of 880 nanometers and a width of 6.6 nanometers. Inside the major protein coat there is a single strain of DNA, which controls the nature of virus. The DNA consists of three genes and the gene-3 is the peptide that is present on the minor protein coat. So we have a billion different viruses that are all genetically similar, except that they differ from each other based on a small peptide on each end. The reason for selecting bacteriophage is not only due to their physical structure but also due to their property of multiplying and modifying rapidly in the lab. When a solution contains all the one billion possible viruses with random peptide insert at the ends of them, the solution is termed as phage library. Usually water is used as the solution to act as a medium for the viruses.
The peptide coats can be independently genetically engineered with different peptides of interest, which ultimately changes the physical and adhering properties of virus towards certain materials, which is a vital property used in the chip-designing process.
DIRECTED EVOLUTION
Directed evolution is used to describe the techniques for the iterative production, evaluation and selection of variants of a biological sequence. It mimics the process of natural selection.
Directed evolution basically involves testing the phage library, selecting the specific engineered viruses and amplifying the population of engineered viruses retaining the original properties, which are desired.
TARGETING PURE SEMICONDUCTOR CRYSTALS
First the phage library, of the peptide, which has affinity towards the substance of interest, is selected. Then the phage library is exposed to the substance of interest, such as a wafer of Zinc sulphide. Those viruses whose minor protein coats have a natural affinity for the material, with their eraser ends stick to the wafer. Then the chip is washed with dilute acid or a chemical bath. Phages that do not bind well to the wafer are washed away and the remaining phages that are sticking to the wafer are removed by a process called PH-Elution. The removed phage population is amplified by infecting with bacteria. Again the produced viruses are poured back onto the chip and subjected to a more stringent i.e. more acidic or basic wash. After several cycles of such washes in which the wash becomes stronger and stronger, the viruses sticking to the wafer are those whose peptide coats have a tight fit to the semiconductor crystal. Similarly several rounds of directed evolution are performed to pick out viruses that bind to crystals of Gallium arsenide and Indium phosphide, the two semi conductors employed in high frequency communications chips. The produced viruses act as versatile templates in the fabrication of chips.
FOUNDATION FOR NANO CIRCUITS, NANO WIRES & NANO RINGS
The viruses can, not only stick to the semiconductor materials, but also can grow nano scale semiconductor crystals over them. Consider first the viruses, which are engineered on some locations on the major protein coat with the peptide, which has affinity towards the semiconductor of interest. Now these, when mixed with precursor chemicals containing the semiconductor’s elemental ingredients, the virus’s engineered peptides act as a template, hustling atoms into the same crystal structure to which the peptides were engineered to bind. The result is an organic and inorganic hybrid with viral particles of 7nm wide, 800nmlong, and 2 to 3nm semiconductor crystals wherever an engineered peptide is found. If the entire protein coat is engineered with the peptide, semiconductor crystals are formed all over the shaft of the virus. These crystals are called Quantum dots and these are the fundamental components of nano scale circuits. The core of the quantum dot is usually contained with zinc sulphide, which has a higher electronic band gap. This improves the confinement of the electron-hole pairs and helps in conduction. After the formation of quantum dots all over the shaft, the virus is subjected to a process called annealing which removes the virus’s organic framework. High-temperature annealing removes the organic materials of virus, leaving the inorganic quantum dots to collapse into the space formerly occupied by the virus to form a single crystal, which results in solid Nano wire. Varying the length of the virus before the start of the process can produce Nano wires of variable lengths.
By suitable peptide engineering, Viruses can be selected to form quantum dots of the semiconductors Zinc sulphide and cadmium sulphide and the magnetic materials cobalt-platinum and iron-platinum from which corresponding Nano wires are made.
For making a Nano ring initially two different genetic modifications are done to the bacteriophage, one at the eraser end and one at the pencil tip end. Then the two ends are linked together with another molecule to form a Nano ring. Then similar procedure as that of a nano wire using precursor chemicals of cobalt. Finally resulting tiny magnetic rings of cobalt particles smaller than 100 nanometers. They find applications in storing magnetic information at a high speed only at room temperature.
CONSTRUCTION OF NANO TRANSISTOR
Transistor fabrication is another important part in the process of chip designing. The transistors used in conventional chip designing are of FET type, which are highly efficient. The present biological approach also has a solution to design FET transistors at Nano scale.
For the construction of the transistor, initially the virus is engineered at the tip of the major protein coat and minor protein coat ends with a peptide, which has affinity towards the Gold. The middle portion of the shaft is engineered with peptide having affinity towards the desired semiconductor whose channel is to be formed. A Silicon wafer is patterned in desired form, so that it has exposed source and drain electrodes of gold with a gate electrode between them buried under a layer of insulation. Now the virus suspension would be poured onto the wafer. Due to the affinity of peptide, the gold seeking ends of the virus would find and bind to the electrodes forming a viral bridge between source and drain, across the gate. Then semiconductor precursor chemicals like zinc sulphide, which should form the channel, would be added. The semiconductor crystals are nucleated over the shaft wherever an engineered peptide is found. As the shaft is completely engineered, it result in formation of a semiconductor coat of 10 nm diameter on the virus turning the virus’s coat into a nano wire. For making a working transistor, the virus is annealed by a blast of heat, leaving only the nano wire bonded to the gold electrodes, yielding a nano transistor.
FORMATION OF LIQUID CRYSTAL FILMS
The virus possesses one peculiar property that can be used to make viral films. This property, called self-assembly, means that when viruses are separated from water they form multilayer films, lining up shoulder to shoulder in a liquid crystal. These liquid-crystal films are optically transparent and can be modified. Properties of the films such as transparency depend mainly on the precursor chemicals: films containing zinc sulphide are crystal clear and are used in many applications related to display and memory technology.
APPLICATIONS
The applications of this approach are fascinating and give a motivating, cutting edge to all future technologies. The applications are:
High speed quantum flash memories
Flash memories can be fabricated by this approach in two ways: one through nanorings of magnetic materials forming high-density magnetic memory, the other through viral films bearing magnetic-material quantum dots, giving rise to high-density quantum flash memories.
The nanorings formed of cobalt particles could be utilised for data storage. The cobalt nanoparticles used to form the rings have magnetic north and south poles and join up when brought together. Once they are formed into rings, a collective magnetic state known as flux closure occurs: within the ring there is a strong magnetic force, or flux, but there is zero net magnetic effect on the outside. This results in cheaper, faster, higher-density magnetic data storage.
The viral films formed through self-assembly are decorated with quantum dots of magnetic materials, and the viral arrays are later turned into nanowires, so that each quantum dot is joined to a nanowire. When the nanowires are systematically patterned, with each quantum dot storing one bit, about 30 GB of memory can be accommodated within 1 sq cm.
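As a quick plausibility check (the arithmetic below is mine, not from the source), the 30 GB per square centimetre figure implies a particular centre-to-centre spacing between dots if each dot stores one bit:

# Back-of-the-envelope check (all figures assumed): what dot spacing
# does 30 GB per square centimetre imply at one bit per quantum dot?

bits = 30 * 8 * 10**9        # 30 GB expressed in bits
area_nm2 = (10**7) ** 2      # 1 cm^2 in square nanometres

area_per_bit = area_nm2 / bits
pitch = area_per_bit ** 0.5  # centre-to-centre spacing on a square grid

print(f"area per bit: {area_per_bit:.0f} nm^2")   # ~417 nm^2
print(f"implied dot pitch: {pitch:.1f} nm")       # ~20 nm

A pitch of about 20 nm is comfortably larger than the 2 to 3 nm dots described earlier, so the claimed density is at least geometrically plausible.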
Quantum computer era
The nano tools and circuits designed under this approach lay a strong foundation for the quantum computer era. Quantum computers are next-generation computers that will harness the behaviour of atoms and molecules to perform memory and processing tasks. Ordinary digital computers work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers are not limited to two states; they encode information as quantum bits, or qubits. A qubit can be a 1 or a 0, or it can exist in a superposition that is simultaneously both 1 and 0 or somewhere in between. Qubits are realised by atoms working together to act as computer memory and processor. Because a quantum computer can hold these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.
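To make the superposition idea concrete, a single qubit can be modelled as a pair of amplitudes whose squares give the probabilities of reading 0 or 1. The following minimal Python sketch (an illustration, not anything from the source) simulates repeated measurement of an equal superposition:

import random

# A qubit as a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1:
# |a|^2 is the probability of reading 0, |b|^2 of reading 1.

def measure(a, b):
    """Collapse the qubit: 0 with probability |a|^2, otherwise 1."""
    return 0 if random.random() < abs(a) ** 2 else 1

a = b = 2 ** -0.5            # equal superposition of 0 and 1

counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(a, b)] += 1

print(counts)                # close to 5000 each: both outcomes live
                             # in the state until it is measured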
Quantum computers can perform certain calculations billions of times faster than any silicon-based computer. They have the potential to store and process enormous amounts of information, immensely more than today's computers can handle. They can factor very large numbers, with direct consequences for cryptography, and can drive science in large-scale modelling projects such as the simulation of material properties.
Quantum communication systems
A quantum communications detector system can be designed on a single 1 mm chip. It is the receiver for a quantum communications system and serves as a calibration tool for single-photon sources, used to evaluate quantum communication protocols and the security of quantum cryptographic systems. It offers much higher efficiency and lower noise than conventional detectors and eliminates false positive results. The detector not only indicates the arrival of a photon at the destination but also determines how many photons arrive simultaneously, producing an electrical signal proportional to the energy absorbed from them. Quantum communications techniques offer greatly improved security by making undetected eavesdropping physically impossible.
Display technology
Present-day image technologies used in computer screens are based on pixels, tiny dots of light emitted by materials such as LCDs, LEDs, or phosphors, on which our eyes cannot focus accurately; this is blamed for a number of vision-related disorders. These problems can be avoided by using displays made of crystallized nanowires in viral films, whose quality and resolution are far higher than those of any present-day technology.
By using a fluid containing the designed viral films as the ink, processes such as ink-jet printing or soft lithography can be used to manufacture flexible display devices, such as roll-up computer or TV screens, at very low cost. With this approach, displays able to serve as the human interface for information technology systems, and new image-generation technology for thumbnail-sized high-quality displays, can be achieved cheaply.
Nanorobots and microscopic computers in medicine
Microscopic computers designed by this approach can be programmed to coordinate microscopic nanorobots that patrol inside the human body. As disease-causing parasites continue to mutate and produce new diseases that the natural immune system cannot cope with, the nanorobots, under the direction of microcomputers installed inside the body, act as nano cell sentinels. Whenever the microcomputer tracks a foreign invader whose DNA differs from the person's own, it commands the cell sentinels to destroy it. Armed with knowledge of the person's DNA, the nano cell sentinels can thus build immunity not only to the common cold but to any dangerous mutation that arises in the body.
CONCLUSION
With all this research going on, future circuits will self-assemble with high fidelity, matching, for example, the gate voltages of each of the millions of transistors in a circuit. Researchers are shifting towards living cells, trying to engineer varieties that can construct and manipulate many types of material, since a cell's genes, unlike those of a virus, can encode complex instructions that can be executed in real time. With a microbe like that, biology could be used not just to build circuits but also to diagnose and repair them.
Nanotechnology achieved through biotechnology could change the very face of the world without consuming natural resources or leaving pollutants. This future technology is the gateway through which innovations that now seem fictitious and hyperbolic can be brought into existence. Nanotechnology through biotechnology is thus one choice for a safe and healthy world.
AN EYE FOR THE BLIND
Abstract
“The seeds of knowledge may be planted in solitude but must be cultivated in public.” We should use our knowledge and education for the needs of the people, and we would like to do something for the thousands of our brethren who have lost their eyesight. It would be a boon in the life of a blind person if there were a scientific invention that could serve as an apt substitute for eyesight; it would surely cultivate the self-confidence and courage to face the world without the help of others. “There is no study that is not capable of delighting us after a little application to it.” Thus we are greatly delighted to apply our knowledge to creating an apt substitute for the visually handicapped.
Index Terms: RADAR, microwave radar, detectors, Doppler shift
1.0 INTRODUCTION
Radar -- an acronym for RAdio Detection And Ranging -- was developed in the years leading up to and during World War II and was used to locate enemy planes and ships. The radar device emits a microwave signal and detects the arrival of the reflected signal. The microwave is reflected by objects such as aircraft, and the elapsed time between emission and return is a function of the distance of the object from the radar device. In addition, the speed and direction of a moving object can be determined by analyzing the shift in the frequency of the microwave signal (the Doppler effect).
Electromagnetic waves radiated by radar, like sound waves, obey the Doppler principle, although electromagnetic waves travel at the speed of light while sound waves travel at the speed of sound. The Doppler effect is a frequency shift that results from relative motion between a frequency source and a listener. If source and listener are not moving with respect to each other (even if both are moving at the same speed in the same direction), no Doppler shift takes place. If the source and listener are moving closer together, the listener perceives a higher frequency; the faster the source or receiver approaches, the larger the shift. If they are moving apart, the listener perceives a lower frequency; the faster they separate, the lower it becomes. The Doppler shift is directly proportional to the relative speed between source and listener and to the frequency of the source, and inversely proportional to the speed at which the wave travels (the speed of light for electromagnetic waves).
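For a radar echo the two-way shift is fd = 2vf/c, where v is the relative speed, f the transmitted frequency and c the speed of light. The short Python sketch below illustrates the magnitudes involved; the 24.125 GHz K-band frequency and the sample speeds are assumed values, not taken from this paper:

# Two-way Doppler shift of a radar echo: fd = 2 * v * f / c.
# The factor of 2 arises because the wave is shifted once on the way
# out and once again on reflection from the moving object.

C = 3.0e8                    # speed of light, m/s

def doppler_shift(v_mps, f_hz):
    """Doppler shift in Hz for relative speed v_mps and carrier f_hz."""
    return 2.0 * v_mps * f_hz / C

f = 24.125e9                 # assumed K-band radar frequency, Hz
for v in (1.4, 14.0):        # walking pace and roughly 50 km/h, in m/s
    print(f"{v:5.1f} m/s -> shift of {doppler_shift(v, f):7.0f} Hz")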
2.0 HARDWARE USED
Microphone with headset.
Analog to digital and digital to analog converter.
Microwave radar.
Microcontroller.
2.1 Microphone with headset
A sophisticated microphone is used to receive voice commands from the user. The voice information is given to the user through the headset.
2.2 Analog-to-Digital & Digital-to-Analog converter
It is used to convert the analog voice signals to digital form for storage, and to convert stored digital data back to analog voice signals that are delivered to the user through the headset.
2.3 Microcontroller
This device recognizes the digital voice signals and triggers the corresponding actions of the radar. It also sends signals to the user based on the values determined by the microwave radar.
2.4 Microwave Radar
This device detects objects around it and determines their speed and direction. The radar sends microwave signals and receives their reflections, from which each object's speed and direction are determined.
3.0 HOW RADAR DETECTORS WORK
Think of a radar signal as a beam of light from a flashlight. When you shine a flashlight at an object, your eyes perceive the light reflected from the object. Now imagine yourself as the object being illuminated. You can see the light from the flashlight from a much farther distance than the person with the flashlight could ever hope to see you. That's because the beam loses energy over distance. So while the beam has enough energy to reach you, the reflected light doesn't have enough energy to travel all the way back to where it started.
Radar guns "see" a vehicle by transmitting a microwave pulse. Then they make use of the Doppler Effect: the frequency of the transmitted pulse is compared to the frequency of the reflection, and speed is calculated by using the difference between them.
Fig 1: Speed is calculated when a pulse is reflected to the RADAR transmitter.
That's the idea behind radar detectors. They look for radar "beams" and find them before they can return a strong enough reflection to "illuminate" you. Detectors use something called superheterodyne reception to accomplish this. Radar detectors are essentially microwave radio receivers that make noise or flash lights when they sense an incoming signal on specific frequencies. Superheterodyne reception allows detection of radar around curves or over hills, and it extends detection range straight ahead.
3.1 Moving-mode Radar Doppler
Moving-mode radar is slightly more complicated. The target echo frequency is shifted by the relative speed between the target and radar. Target relative speed (to radar) is the sum of target and user speed for opposite direction targets. For same-lane (direction) targets relative target speed is the difference between target and user speed.
Moving-mode radar depends on two measurements to derive target speed:
(1) GROUND ECHO -- measures user speed
(2) TARGET ECHO -- measures relative (to radar) target speed.
Ground echoes are Doppler shifted by the user velocity. The radar tracks the ground echo to determine user (radar) velocity. The radar uses user velocity and relative (to radar) target speed (target echo) to calculate actual target speed.
[Figure: moving-mode spectra for opposite-direction and same-lane (front antenna) targets.]
Opposite-direction target: Vrelative = Vp + Vt, so Vt = Vrelative - Vp.
Same-direction (same-lane) target: Vrelative = Vp - Vt, so Vt = Vp - Vrelative.
Here Vp is the user (patrol) speed and Vt is the target speed.
Note that on-coming (opposite-direction) targets have a negative speed relative to same-lane targets. A radar with this capability built in can therefore distinguish between same-lane and opposite-direction targets.
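Once the ground echo and target echo have been converted to speeds, recovering the true target speed is a single line of arithmetic. A minimal Python sketch (function and argument names are my own):

# Moving-mode arithmetic: recover the true target speed Vt from the
# user (patrol) speed Vp, measured via the ground echo, and the
# relative speed Vrel, measured via the target echo.

def target_speed(v_p, v_rel, opposite_direction):
    # Opposite direction: Vrel = Vp + Vt, so Vt = Vrel - Vp.
    # Same lane:          Vrel = Vp - Vt, so Vt = Vp - Vrel.
    return v_rel - v_p if opposite_direction else v_p - v_rel

print(target_speed(v_p=60, v_rel=130, opposite_direction=True))   # 70
print(target_speed(v_p=60, v_rel=15, opposite_direction=False))   # 45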
BEAM SPREAD
For a laser or microwave beam, Beam Spread = 2 R tan(Beamwidth / 2).
Example: with Beamwidth = 3.5 mR (milliradians; 0.201°) and Range R = 500 feet (152.4 meters), the Beam Spread is 1.75 feet (0.53 meters).
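The quoted example can be checked directly from the formula; the following few lines of Python reproduce the 0.53 m spread:

import math

# Check of the beam-spread example: spread = 2 * R * tan(beamwidth / 2).
beamwidth = 3.5e-3                    # 3.5 milliradians (0.201 degrees)
R = 152.4                             # range in metres (500 feet)

spread = 2 * R * math.tan(beamwidth / 2)
print(f"beam spread: {spread:.2f} m") # ~0.53 m, i.e. about 1.75 feet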
4.0 BLOCK DIAGRAM
5.0 WORKING
The microphone receives the voice command from the user and passes it to the converter, which converts the analog voice signal into digital form. The microcontroller receives this digital signal and matches it against the stored commands. Suppose, for example, that the command “cross road” is stored in the microcontroller. If the user speaks “cross road”, the digital voice signals match, and the microcontroller triggers the microwave radar to detect the speed and direction of nearby objects. The information inferred from the radar is then evaluated using simple logic:
If distance > 50 m and speed < 3 km/h
    Play sound “Can Move”
Else
    Play sound “Wait”
The stored digital sound is converted back to analog by the converter and delivered to the user through the headset.
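The decision logic can also be rendered as a short program. In the Python sketch below, radar_read and play are hypothetical stand-ins for the radar interface and the digital-to-analog playback path described above:

# Minimal sketch of the crossing decision. radar_read and play are
# hypothetical stand-ins for the radar and audio interfaces.

def radar_read():
    """Pretend radar query: (distance to nearest object in m, its speed in km/h)."""
    return 72.0, 1.5

def play(message):
    """Stand-in for playing a stored sound through the headset."""
    print(f"headset: {message}")

distance, speed = radar_read()
if distance > 50 and speed < 3:   # object far away and nearly stationary
    play("Can Move")
else:
    play("Wait")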
6.0 CONCLUSION
We have created a model that would be a boon in the life of a blind person. This scientific invention would surely cultivate the self-confidence and courage for the blind to face the world without the help of others.