More ciphers

Here are a couple more ciphers; these are actually still in use today.

The RSA cipher, one of the first public-key ciphers, gets its security from the difficulty of the integer factorisation problem. You’ll have to provide two large primes to generate the key pair first. It uses the Sieve of Eratosthenes to find a random prime for the public-key exponent, which is quite time-consuming.
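As a rough illustration, here is a minimal textbook-RSA key-generation sketch in Python (this is not the program from the post: it takes the two primes directly and uses the common fixed exponent e = 65537 instead of sieving for a random one, and the primes below are toy-sized):

# Minimal textbook-RSA sketch; real keys need primes hundreds of digits long.
def make_keypair(p, q, e=65537):
    n = p * q
    phi = (p - 1) * (q - 1)        # Euler's totient of n = p*q
    d = pow(e, -1, phi)            # modular inverse of e mod phi (Python 3.8+)
    return (e, n), (d, n)          # public key, private key

public, private = make_keypair(61, 53)       # toy primes
message = 42
ciphertext = pow(message, *public)           # c = m^e mod n
assert pow(ciphertext, *private) == message  # m = c^d mod n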

Back to symmetric block ciphers, we have the AES block cipher, which uses 128-bit keys (in this implementation). Unlike DES, it isn’t practically brute-forceable and lacks NSA backdoors :)



Filed under cryptography

Cryptography in python

I’ve been taking cryptography this semester. Since it’s a pretty algorithm-heavy module, I’ve implemented some of the ciphers in Python to help me understand the material. I’ve also been working through the book “Modern Cryptanalysis” by Christopher Swenson, which offers a good alternative explanation of the rather formal lecture content. The official textbook for the course is “Cryptography and Network Security: Principles and Practice” by William Stallings.

The first one, which is quite messy, is a program to solve for the multiplicative inverse of a number within a finite field using the Extended Euclidean Algorithm. Hm, just realised that the Wikipedia entry already has a nice code sample :)
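For the record, a compact version along those lines might look like this (a sketch, not the original messy script):

# Multiplicative inverse in GF(p) via the Extended Euclidean Algorithm.
def egcd(a, b):
    # Returns (g, x, y) such that a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    g, x, _ = egcd(a % m, m)
    if g != 1:
        raise ValueError("%d has no inverse modulo %d" % (a, m))
    return x % m

assert (7 * mod_inverse(7, 26)) % 26 == 1    # 7^-1 mod 26 == 15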

Then there’s the classic Caesar’s substitution cipher, which shifts every letter ahead by 3. You can run the unit tests simply by calling the program, and add “-v” as an argument to see more verbose output.
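The core of such a shift cipher fits in a few lines; a sketch (not the post’s script, and without the unit-test harness):

# Caesar cipher: shift every lowercase letter forward by a fixed amount.
import string

def caesar(text, shift=3):
    shifted = string.ascii_lowercase[shift:] + string.ascii_lowercase[:shift]
    return text.lower().translate(str.maketrans(string.ascii_lowercase, shifted))

assert caesar("attack at dawn") == "dwwdfn dw gdzq"
assert caesar("dwwdfn dw gdzq", shift=23) == "attack at dawn"   # decrypt: shift by 26 - 3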

Next is the Hill cipher. This program depends on the earlier Caesar and Euclid snippets to run. There’s also a problem with sentences containing spaces, so I couldn’t solve the ciphertext given in Stallings’ book :(
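For reference, the encryption half of a 2x2 Hill cipher can be sketched as below (not the post’s program; note that spaces and punctuation are simply stripped, which is exactly the preprocessing the original script tripped over):

# Hill cipher (2x2): each ciphertext block = key matrix * plaintext block, mod 26.
def hill_encrypt(plaintext, key):
    nums = [ord(c) - ord('a') for c in plaintext.lower() if c.isalpha()]
    if len(nums) % 2:
        nums.append(ord('x') - ord('a'))      # pad odd-length input with 'x'
    out = []
    for i in range(0, len(nums), 2):
        p0, p1 = nums[i], nums[i + 1]
        out.append((key[0][0] * p0 + key[0][1] * p1) % 26)
        out.append((key[1][0] * p0 + key[1][1] * p1) % 26)
    return ''.join(chr(c + ord('a')) for c in out)

key = [[3, 3], [2, 5]]                        # determinant 9 is coprime to 26
assert hill_encrypt("help", key) == "hiat"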

The original program which motivated me to start implementing ciphers was the EASY1 code sample from Modern Cryptanalysis. About three quarters of the program is copied from the textbook, but Christopher left out a couple of decrypting functions. The EASY1 cipher is also the target of many of the “attack” programs later in the book which I can’t wait to try :)

Last but not least, the DES cipher, which is still in use today in the form of Triple DES. Like EASY1, it is a block cipher, but it uses the Feistel structure, allowing the same hardware to be reused for encryption and decryption. The program has many isolated functions, making it a good candidate for unit testing as well, so it was a really fun exercise :)
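Just to show off the Feistel symmetry, here is a toy sketch (the round function is a stand-in, not DES’s actual expansion/S-box/permutation round):

# Toy Feistel network: the same routine encrypts and decrypts; only the
# order of the round keys is reversed.
def toy_round(half, key):
    return (half * 31 + key) % 65536          # placeholder round function

def feistel(left, right, keys):
    for k in keys:
        left, right = right, left ^ toy_round(right, k)
    return right, left                        # undo the final swap

keys = [7, 42, 1001, 9]
l, r = feistel(12345, 54321, keys)                    # encrypt
assert feistel(l, r, keys[::-1]) == (12345, 54321)    # decrypt with reversed keys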


Filed under cryptography

Camera Systems

Any device which captures an image (e.g. the human eye, cameras) consists of 3 basic components:
Aperture – Controls the amount of light rays entering the device
Optical system (lens) – Focuses incoming light rays to converge on the screen
Screen – Captures the image using photoreceptors

When determining how the image will be formed, two questions to ask are: where does a point on the scene appear in the image, and how bright does that point appear?

Thin lens system
To simplify the study of image formation, we can use a thin lens system, which assumes that the thickness of the lens can be ignored (all refraction is treated as happening at a single plane).

[Figure: thin lens diagram]

The basic assumption of the thin lens system is that all rays entering the lens parallel to the optical axis will intersect the focus on the opposite side. Likewise, all rays entering the lens from the focus will emerge parallel on the opposite side. Putting these two properties together, we can tell that all rays entering the lens center will pass through unchanged. These assumptions give us three principal rays which can be easily visualised.

Fundamental equation of the thin lens system
1/a + 1/b = 1/f
a – distance from the scene object to the lens
b – distance from the lens to the focusing point
f – focal length of the lens
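A quick worked example, rearranging for the image distance b (a small sketch; the units just have to be consistent):

# Thin-lens equation solved for b: 1/b = 1/f - 1/a.
def image_distance(a, f):
    return 1.0 / (1.0 / f - 1.0 / a)

# An object 2000 mm from a 50 mm lens focuses about 51.3 mm behind the lens.
print(image_distance(a=2000.0, f=50.0))   # ~51.28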

Field of View
The maximum angle of the scene observable by the camera; its determining factors are the focal length of the lens and the size of the imaging plane.
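A common way to put those two factors together (assuming the lens is focused at infinity, so the imaging plane sits at the focal length) is FOV = 2 * arctan(d / 2f), where d is the width of the imaging plane:

import math

# Angle of view from imaging-plane width d and focal length f.
def field_of_view_deg(d, f):
    return math.degrees(2 * math.atan(d / (2.0 * f)))

print(field_of_view_deg(d=36.0, f=50.0))   # 36 mm plane + 50 mm lens -> ~39.6 degrees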

Depth of Field
The range of depths over which objects remain in acceptable focus simultaneously; its determining factors are the focal length of the lens and the aperture size.

[Figure: field of view diagram]


Filed under computer vision

Frequent Pattern Analysis

Frequent patterns, which are sequences or item sets that occur frequently in a data set, form the basis for data mining as they can be used to find associations such as products which sell well together and so on.

Transactional database
Starting out with a set of items, I = {apple, orange, soap, candy, scotch tape}, each transaction would then refer to a subset of I. The collection of all these transactions is called a transactional database, or TDB.

T1 – {apple, orange}
T2 – {orange, soap}
T3 – {orange, soap, candy, scotch tape}

In order to find patterns, we first have to define a support threshold. The support of an itemset is its number of occurrences in the TDB, and an itemset is frequent if its support meets the threshold. With a support threshold of 2, we would have the following frequent itemsets:

{orange}, {soap}, {orange, soap}

We can also say that the relative support of {orange, soap} is around 67%, as it occurs in 2 of the 3 transactions ({orange} itself has a relative support of 100%).
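For a TDB this small, the frequent itemsets and their supports can be found by brute-force enumeration (a sketch; this is exactly the kind of expensive enumeration noted below, so real datasets need smarter algorithms such as Apriori):

from itertools import combinations

# Brute-force support counting over the toy TDB above.
tdb = [
    {"apple", "orange"},
    {"orange", "soap"},
    {"orange", "soap", "candy", "scotch tape"},
]

def frequent_itemsets(tdb, min_support):
    items = sorted(set().union(*tdb))
    freq = {}
    for size in range(1, len(items) + 1):
        for candidate in combinations(items, size):
            support = sum(1 for t in tdb if set(candidate) <= t)
            if support >= min_support:
                freq[candidate] = support
    return freq

print(frequent_itemsets(tdb, min_support=2))
# {('orange',): 3, ('soap',): 2, ('orange', 'soap'): 2}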

In general we want to reduce the number of patterns to save storage space; furthermore, enumerating all the frequent itemsets in a TDB is often computationally expensive.

Equivalence classes
An equivalence class is a set of itemsets which always occur in the same set of transactions.

Types of patterns
Maximal pattern – For a given itemset x in a set A of itemsets, x is maximal in A if and only if there is no other itemset in A that is a superset of x.
Minimal pattern – For a given itemset x in a set A of itemsets, x is minimal in A if and only if there is no other itemset in A that is a subset of x.
Closed pattern – The maximal pattern of an equivalence class
Generator – The minimal pattern of an equivalence class
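Given the frequent itemsets and their supports (e.g. from the sketch above), the maximal and closed patterns can be picked out by comparing each itemset against its supersets; for closed patterns this sketch uses the equivalent formulation that no proper superset has the same support:

# Picking out maximal and closed patterns from a {itemset: support} mapping.
def maximal_patterns(freq):
    return [x for x in freq if not any(set(x) < set(y) for y in freq)]

def closed_patterns(freq):
    return [x for x in freq
            if not any(set(x) < set(y) and freq[y] == freq[x] for y in freq)]

freq = {("orange",): 3, ("soap",): 2, ("orange", "soap"): 2}
print(maximal_patterns(freq))   # [('orange', 'soap')]
print(closed_patterns(freq))    # [('orange',), ('orange', 'soap')]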

Down closure property
If X is a frequent itemset, then all subsets of X are also frequent itemsets.
If X is an infrequent itemset, then all supersets of X are also infrequent itemsets.

Convexity
For any pattern space (any set of patterns) A, if every pattern which is a superset of the minimal patterns in A and a subset of the maximal patterns in A is present in A, then A is convex.

Representations of datasets

We can reduce the size of a dataset (compression) by using various representations, which refer to selecting representative itemsets to create a new dataset. Representations may have two properties:
Lossless

  • Every pattern in the dataset and its support can be derived from the representation without access to the original dataset.

Concise

  • The size of the representation is smaller than the original dataset.

Selecting the maximal patterns in a dataset is concise but lossy.
Selecting the closed patterns in a dataset is concise and lossless.


Filed under data mining

Discretisation

Continuous variables can be discretised into interval ranges to reduce the number of possible values. One example of a discretisation technique is binning, which refers to dividing the data set into smaller chunks (called bins) and then replacing every value in a bin with a single representative value, usually the bin mean.

Equal-frequency binning
Each bin contains an equal number of items. E.g.

[4.5, 6.5, 8.5, 10.5, 11.24, 15.56]

split into 2 bins with 3 items each (number of bins is arbitrary)

Bin 1 – [4.5, 6.5, 8.5]
Bin 2 – [10.5, 11.24, 15.56]

and perform smoothing

Bin 1 – [7, 7, 7] (mean 6.5, rounded up to 7)
Bin 2 – [13, 13, 13] (mean 12.43, rounded up to 13)
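In Python, the equal-frequency case might be sketched like this (rounding the bin mean up, to match the numbers above):

import math

# Equal-frequency binning with mean smoothing (rounding up, as above).
def equal_frequency_bins(values, n_bins):
    values = sorted(values)
    size = len(values) // n_bins
    return [values[i:i + size] for i in range(0, len(values), size)]

def smooth_by_mean(bins):
    return [[math.ceil(sum(b) / len(b))] * len(b) for b in bins]

bins = equal_frequency_bins([4.5, 6.5, 8.5, 10.5, 11.24, 15.56], n_bins=2)
print(bins)                  # [[4.5, 6.5, 8.5], [10.5, 11.24, 15.56]]
print(smooth_by_mean(bins))  # [[7, 7, 7], [13, 13, 13]]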

Equal-width binning
Each bin represents a single interval, and each interval has the same width.

[4.5, 6.5, 8.5, 10.5, 11.24, 15.56, 17.18, 19.92, 23.41, 24.56]

select the interval width from the range of the values: 24.56 - 4.5 = 20.06, which can be rounded up to 21, giving an equal width of 7 for 3 bins.

Bin 1 – [4.5, 6.5, 8.5, 10.5] (4 <= x < 11)
Bin 2 – [11.24, 15.56, 17.18] (11 <= x < 18)
Bin 3 – [19.92, 23.41, 24.56] (18 <= x < 25)

Then smooth it:

Bin 1 – [8, 8, 8, 8]
Bin 2 – [15, 15, 15]
Bin 3 – [22, 22, 22]

Another way of selecting the smoothing value is to use the bin boundaries; for instance, 4, 11 and 18 (the lower boundaries) can be used in the example above to smooth the bins.
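The bin-assignment step for equal-width binning might be sketched like this (the lower bound 4 and width 7 are passed in explicitly to reproduce the intervals above):

import math

# Equal-width binning: split the value range into n_bins intervals of the same width.
def equal_width_bins(values, n_bins, lo=None, width=None):
    lo = min(values) if lo is None else lo
    if width is None:
        width = math.ceil((max(values) - min(values)) / n_bins)
    bins = [[] for _ in range(n_bins)]
    for v in sorted(values):
        idx = min(int((v - lo) // width), n_bins - 1)   # clamp the top edge
        bins[idx].append(v)
    return bins

data = [4.5, 6.5, 8.5, 10.5, 11.24, 15.56, 17.18, 19.92, 23.41, 24.56]
print(equal_width_bins(data, n_bins=3, lo=4, width=7))
# [[4.5, 6.5, 8.5, 10.5], [11.24, 15.56, 17.18], [19.92, 23.41, 24.56]]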


Filed under data mining

Basic definitions in data mining

Data mining is the practice of finding patterns in raw data.

Data object – A single entity/record/tuple

Attribute – A property or characteristic of a data object

Types of attributes
Quoted definitions taken from here.

  1. Nominal – A set of arbitrary symbols which are distinct. Assuming you don’t order them alphabetically, they cannot be compared. Some examples are colour, name and address.
  2. Ordinal – Symbols with an intrinsic order, such as {bronze, silver, gold} for medal colours, or maybe {shotgun, rocket launcher, BFG9000} when ranking Doom weapons. However, the differences between the individual values hold no meaning: a BFG is better than a rocket launcher and a rocket launcher is better than a shotgun, but by how much?
  3. Interval – Symbols where “the difference between two values is meaningful”. Taken from the site linked above, an example of interval attributes would be the celsius scale, where 0 does not mean “absence of temperature”. I guess that would be absolute zero instead.
  4. Ratio – Same as interval attributes, but with "a clear definition of 0.0 – When the variable equals 0.0, there is none of that variable." If you picked up the Last Laugh and looked at its stats, the ratio attributes would be dmg, dmg/sec, speed, +strength, +stamina and durability.

[Image: stats of the Last Laugh item]

Ways to represent sets of data

  • Record – represent data in a table format, think of your typical spreadsheet or database table. Each record is a row, with its own set of attributes as the columns. Two special types of record sets are document and transaction:

Document

For any given set of words (a document), count the number of times each word appears and put it in a separate attribute.

Transaction
Each record is a set of items. The typical example would be a shop where customers make purchases. Each purchase would be a single transaction, and the individual items bought during the transaction would make up the set. For example:

Transaction 1 – {Spellbooks, potions, luminiferous ether}

  • Graph – represent data in a graph, typically chemical molecules or social networks like Friendster
  • Ordered – represent data in a sequence. Using the shop example, where each set of curly brackets represents a single transaction, the ordered sequence of transactions would be:

{scroll, runes, frogbat}{frogbat}{spellbooks, luminiferous ether}{frogbat}{grue-b-gone}

Data preprocessing – reducing and selecting the data we want to process.

Aggregation – combining multiple attributes into a single attribute

Sampling – selecting a representative (same properties as the original) subset of the data.

Dimensionality Reduction – reducing the number of variables so that it can be plotted on a graph (among other benefits), done by transforming the variables or choosing a subset.

Some more info on data mining.


Filed under data mining

New semester begins

After more than a year of absence, some recent comments reminded me about this blog. I guess this would be a good time to start posting again. I still have a year left to go in NTU, here’s to a fruitful year ahead.


Filed under life