## Saturday, 31 December 2016

In this post (with a continuation here), I've built a small Python library for working with propositional logic expressions. There is new functionality; a welcome post about it, from a Jupyter Notebook, is here, and the code on GitHub has been updated. See you next year!

## Friday, 30 December 2016

### Tautology Check in Jupyter Notebook

Another Jupyter Notebook post: check whether a given propositional calculus formula is a tautology. The code is also on GitHub.
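The brute-force idea behind such a check can be sketched in a few lines of plain Python (an illustrative sketch, not the library's actual API): enumerate every truth assignment of the variables and verify that the formula holds under all of them.

```python
from itertools import product

def is_tautology(formula, variables):
    """Brute-force check: the formula (a Python callable) must be
    True under every assignment of truth values to its variables."""
    return all(formula(*values)
               for values in product([False, True], repeat=len(variables)))

print(is_tautology(lambda p: p or not p, ["p"]))      # True  (law of excluded middle)
print(is_tautology(lambda p, q: p or q, ["p", "q"]))  # False
```

For n variables this enumerates all 2^n rows of the truth table, so it only scales to small formulas.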

## Wednesday, 28 December 2016

### Automatic Differentiation Update

I added new functions to my Python automatic differentiation:

raising a dual number to a power, natural logarithm, hyperbolic functions, and the exponential; GitHub and the Jupyter Notebook are updated.

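For illustration, here is a minimal dual-number sketch of the idea (my own toy version, not the repository code): each value carries its derivative alongside it, and the new functions propagate both parts.

```python
import math

class Dual:
    """A dual number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __mul__(self, other):
        # Product rule: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

def exp(x):
    # d/dx exp(x) = exp(x)
    return Dual(math.exp(x.value), math.exp(x.value) * x.deriv)

def log(x):
    # d/dx log(x) = 1/x
    return Dual(math.log(x.value), x.deriv / x.value)

def sinh(x):
    # d/dx sinh(x) = cosh(x)
    return Dual(math.sinh(x.value), math.cosh(x.value) * x.deriv)

# Derivative of f(x) = x * exp(x) at x = 1: f'(x) = (1 + x) * exp(x) = 2e.
x = Dual(1.0, 1.0)   # seed the derivative with 1
y = x * exp(x)
print(y.deriv)       # ≈ 5.43656 (2 * e)
```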


## Monday, 19 December 2016

### Python Data Structures Have New Members

Added new members to my Python data structures family: Stack, Heap (min-heap), Binary Tree, Binary Search Tree, and Parse Tree (this last one is in fact a parse-tree building and evaluation algorithm).

Not much more work remains to have all the basic data structures implemented; I think I just need graphs. Hash tables are done well in the Python standard library, so there's no need to repeat the job. Code here.

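To illustrate the parse-tree part (a sketch of the general technique; the tuple representation and names are my assumptions, not the repository's): a fully parenthesized expression becomes a binary tree, which is then evaluated recursively.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def evaluate(node):
    """Recursively evaluate a parse tree given as nested tuples:
    either a plain number, or (op, left_subtree, right_subtree)."""
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left), evaluate(right))
    return node

# (3 + 4) * 2 as a parse tree:
tree = ("*", ("+", 3, 4), 2)
print(evaluate(tree))  # 14
```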

### Project Euler Helper

For Eulerians: I've found on the internet a file with some Project Euler helper functions. I added a few of my own, and this is the file. Good luck!

## Sunday, 18 December 2016

### Python Automatic Differentiation

I've played with the interesting concept of automatic differentiation, and this is a Jupyter Notebook guest post about it. The code is also here.

## Wednesday, 14 December 2016

### Python Immutable Stack

While working on something bigger, I added a new data structure to the Python family; code on GitHub.

The interface is based on my cons list. Due to the lack of TCO support in Python it is not very useful - for learning purposes or small datasets only.

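A minimal sketch of what such an interface might look like (the class and method names here are my assumptions, not the repository's): push and pop return new stacks that share structure with the old one, so the old stack survives every operation.

```python
class ImmutableStack:
    """Persistent stack: push and pop return new stacks,
    sharing structure with the old one instead of mutating it."""
    __slots__ = ("_head", "_tail")

    def __init__(self, head=None, tail=None):
        self._head, self._tail = head, tail

    def is_empty(self):
        return self._head is None and self._tail is None

    def push(self, item):
        # The new stack just points back at the old one.
        return ImmutableStack(item, self)

    def peek(self):
        if self.is_empty():
            raise IndexError("peek from empty stack")
        return self._head

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._tail

s1 = ImmutableStack().push(1).push(2)
s2 = s1.pop()
print(s1.peek(), s2.peek())  # 2 1  -- s1 is untouched by the pop
```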

## Sunday, 11 December 2016

### Python Immutable Trees

When I was building the immutable Python list interface, I realized that an immutable binary tree (or ternary, and others) can be created in the same way - using a Python Abstract Base Class. The code looks like:



```python
from abc import ABCMeta, abstractmethod


class Tree(metaclass=ABCMeta):
    """Abstract interface for immutable binary trees."""

    @abstractmethod
    def is_empty(self):
        pass

    def item(self):
        pass

    def left(self):
        pass

    def right(self):
        pass


class Nil_tree:
    """The empty tree."""

    def is_empty(self):
        return True

    def left(self):
        raise Exception("Empty")

    def right(self):
        raise Exception("Empty")

    def __str__(self):
        return "()"


class Binary_tree:
    """The non-empty tree: (val, left, right)."""

    def __init__(self, item, left, right):
        self.item = item
        self.left = left
        self.right = right

    def is_empty(self):
        return False


# Register both implementations as virtual subclasses of Tree.
Tree.register(Nil_tree)
Tree.register(Binary_tree)
```

I would call it purely functional. Can it be used in practice? Maybe for doing some functional programming in Python. This code, plus some basic tree functions, is freely available on GitHub.

PS I also added lots of methods to the cons lists; code here.

## Wednesday, 7 December 2016

### Immutable Lists in Python

Surprisingly, when I opened GitHub and searched for "immutable lists python", it gave me only two not very interesting results. I'm working through materials in Scheme (with the goal of writing an interpreter and dipping a bit into NLP), but I want to do it in Python, so I wrote my own implementation. It's similar to Scala's recursive lists.

The code is here. There are a few methods - so far it works :)

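The Scala-style recursive list can be sketched like this (an illustrative toy, not the repository code): a list is either Nil (empty) or a Cons cell holding a head element and the tail, which is itself a list.

```python
class Nil:
    """The empty list."""
    def is_empty(self):
        return True

class Cons:
    """A non-empty list: a head element plus the tail (another list)."""
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

    def is_empty(self):
        return False

def to_pylist(xs):
    """Walk the cons cells and collect the elements into a Python list."""
    out = []
    while not xs.is_empty():
        out.append(xs.head)
        xs = xs.tail
    return out

xs = Cons(1, Cons(2, Cons(3, Nil())))
print(to_pylist(xs))  # [1, 2, 3]
```

Because cells are never mutated, prepending with `Cons(0, xs)` gives a new list while `xs` stays valid - the same structural sharing the post describes.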

## Saturday, 3 December 2016

### Interesting Python Links

Here are a few links that I think are worth checking:

https://boltons.readthedocs.io/en/latest/debugutils.html

http://www.oreilly.com/programming/free/20-python-libraries-you-arent-using-but-should.csp

http://pybee.org/

https://automatetheboringstuff.com/

Enjoy!


### Effective Overflow Testing (Bit Hacks II)

This post is in fact the second part of this one. I reproduce from the book the test for overflow when multiplying two unsigned 32-bit integers, and extend it to unsigned longs (64-bit unsigned). The basic idea is to check the sum of the numbers of leading zeros of the factors, nlz(x) + nlz(y) (for 64 bits the reasoning is the same). When nlz(x) + nlz(y) is greater than or equal to 32, the multiplication does not overflow; when it is less than 31, it overflows; and when it is equal to 31 the result is uncertain - additional checks have to be made.

The function is fast; there was no time difference when the check was run in a loop preceding the multiplication.

Code here (I'm moving from Bitbucket; it seems these days everybody gets a GitHub account at birth :)).

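A Python port of the idea, for illustration (a sketch; the actual code is in C, and since Python integers don't overflow, the masks below only simulate 32-bit wraparound):

```python
def nlz32(x):
    """Number of leading zeros of x viewed as a 32-bit word."""
    return 32 - x.bit_length()

def umul32_overflows(x, y):
    """True iff x * y overflows unsigned 32-bit multiplication."""
    m = nlz32(x) + nlz32(y)
    if m >= 32:
        return False           # the product fits in 32 bits
    if m <= 30:
        return True            # the product needs at least 33 bits
    # m == 31: the uncertain boundary case; do the extra checks.
    t = x * (y >> 1)           # fits in 32 bits when m == 31
    if t > 0x7FFFFFFF:
        return True            # doubling t would already overflow
    z = (t * 2) & 0xFFFFFFFF
    if y & 1:
        z = (z + x) & 0xFFFFFFFF
        if z < x:              # the addition wrapped around
            return True
    return False
```

In C the `& 0xFFFFFFFF` masks come for free from the unsigned type, which is what makes the original version so cheap.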

## Tuesday, 29 November 2016

### Profiling Python Code in Jupyter Notebook

You probably know that there is this cool piece of kit: the Jupyter Notebook. I've recently implemented some data structures in Python.

Here is a, let's call it, "live session" of profiling one of them in the notebook; I think the notebook is a great tool for this.

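Outside the notebook, the same kind of session can be sketched with the standard cProfile module (in a notebook you would typically reach for the %timeit or %prun magics instead):

```python
import cProfile
import io
import pstats

def build_list(n):
    # Deliberately quadratic: repeated insertion at the front of a list.
    xs = []
    for i in range(n):
        xs.insert(0, i)
    return xs

profiler = cProfile.Profile()
profiler.enable()
build_list(10_000)
profiler.disable()

# Print the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report immediately points at `list.insert` as the hot spot, which is exactly the kind of insight the notebook session shows interactively.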

## Sunday, 20 November 2016

### Python Data Structures

Playing with Python 3, I've written some data structure implementations - an AVL tree, a doubly linked list and a min-heap. Code here.

Anybody learning Python is welcome to test them; there is also room to implement more functionality, like min and max in the AVL tree, or reversing the list.

More code will, hopefully, come soon.

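As an illustration of one of them (a sketch of the standard technique, not the repository code): a binary min-heap keeps the tree in a flat list and restores the heap property by sifting the new element up after each insert.

```python
class MinHeap:
    """Array-backed binary min-heap: the parent of index i is (i - 1) // 2."""
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)
        i = len(self.items) - 1
        # Sift up: swap with the parent while the parent is larger.
        while i > 0 and self.items[(i - 1) // 2] > self.items[i]:
            parent = (i - 1) // 2
            self.items[i], self.items[parent] = self.items[parent], self.items[i]
            i = parent

    def peek_min(self):
        # The smallest element always sits at the root.
        return self.items[0]

h = MinHeap()
for x in [5, 3, 8, 1]:
    h.push(x)
print(h.peek_min())  # 1
```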

## Wednesday, 16 November 2016

### Bit Tricks

After reading some passages from Henry S. Warren's fantastic "Hacker's Delight", I've implemented and tested a few functions. Code with commentary here.

They work faster (sometimes 2 times faster) than naive implementations, and no worse than the functions from math.h; I enabled all compiler optimizations. Feel free to check how it works in Java or C#!

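One classic trick of this kind, sketched here in Python rather than C: rounding up to the next power of two (clp2 in Hacker's Delight) by smearing the highest set bit all the way to the right and then adding one.

```python
def next_pow2(x):
    """Round a 32-bit value up to the next power of two (clp2)."""
    x -= 1
    x |= x >> 1    # smear the top bit into the next position...
    x |= x >> 2    # ...then the next two...
    x |= x >> 4
    x |= x >> 8
    x |= x >> 16   # ...until all bits below the top bit are set
    return x + 1

print(next_pow2(17))  # 32
print(next_pow2(64))  # 64
```

After the five shifts every bit below the leading one is set, so adding one carries straight through to the next power of two.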

## Wednesday, 9 November 2016

### Map, Filter and Reduce in C

I think this is useless, but I've written map, filter and reduce in C. Code here. No side effects, pure functions, tail recursive... well, iterative ;)

## Monday, 7 November 2016

### Few Benchmarks

I've recently played with the IPython notebook and decided to make some speed comparisons between Java, Python and Scheme (Racket) - they can be viewed here.

Edit! After small changes the results have changed too.

Edit2! Additional tests here.

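On the Python side, such comparisons can be sketched with the standard timeit module (illustrative only; this is not the notebook's actual benchmark code):

```python
import timeit

def sum_loop(n):
    # The explicit-loop version, as one would write it in Java or Scheme.
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    # The idiomatic Python version, pushed down into C.
    return sum(range(n))

for fn in (sum_loop, sum_builtin):
    t = timeit.timeit(lambda: fn(10_000), number=200)
    print(f"{fn.__name__}: {t:.4f}s")
```

Running both under the same `number` of repetitions keeps the comparison fair; in a notebook, `%timeit` does the repetition bookkeeping for you.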


## Friday, 22 January 2016

### Persistent Lists

The last post was a bit long and theoretical; this one is going to be more practical. The problem is how to implement persistent list operations, like concatenation or updating an element of a list.


First, concatenation. In the imperative way we can easily do it by changing pointers: make the tail of the first list point to the head of the second. The lists are concatenated after that, and the operation runs in constant time, but both original lists are now destroyed. If we want to concatenate and still have the old lists available, it should be done this way: make a copy of the last node of the first list and make it point to the head of the second; then make a copy of the second-to-last node of the first list and make it point to that copy, and so on... This leads us to a recursive algorithm:

```
fun-concat(lst1, lst2) {
    if (is-empty(lst1))
        return lst2
    else
        return cons(car(lst1), fun-concat(cdr(lst1), lst2))
}
```

Cons, car, cdr and is-empty respectively mean: add an element to the beginning of the list, return the first element, return the list of all but the first element, and check whether a list is empty. After this operation the old lists are still available, but there is a cost: it now takes linear O(n) time - we trade time for persistence. This pseudocode may easily be transformed into Clojure:

```clojure
(defn my-concat [xs ys]
  (if (empty? xs)
    ys
    (cons (first xs) (my-concat (rest xs) ys))))
```


And Python:

```python
# Helpers myEmpty, myHead and myCdr come from the linked code.
def myConcat(xs, ys):
    if myEmpty(xs) == 1:
        return ys
    else:
        return [myHead(xs)] + myConcat(myCdr(xs), ys)
```

Python possibly has cdr and head equivalents, but I just wrote my own; the code is available here.

Another essential function is updating an element. To do it in functional style we have to copy the changed element of the list (which affects the node before it), so we copy the node before it, and the one before that, and so on. So, actually, we have to copy not the whole list, but all the nodes that have a chain of pointers leading to the changed element. The algorithm in Clojure:

```clojure
(defn my-update [xs n ys]
  (cond
    (empty? xs) (throw (Exception. "Can't change empty list!"))
    (= n 0) (cons ys (rest xs))
    :else (cons (first xs) (my-update (rest xs) (dec n) ys))))
```

And Python:

```python
def myUpdateElement(xs, n, y):
    if myEmpty(xs) == 1:
        raise ValueError("Can't change an empty list")
    elif n > (len(xs) - 1) or n < 0:
        raise ValueError("Index Out Of Range")
    elif n == 0:
        return [y] + myCdr(xs)
    else:
        return [myHead(xs)] + myUpdateElement(myCdr(xs), dec(n), y)
```

This algorithm is a conditional statement: if the list is empty, raise an error; in the case of changing the first element of the list, just cons the new element onto the rest of the list (the base case of the recursion); and when changing an element other than the first, recursively call the function with a decremented index, which does the job. That's it - a very basic introduction to persistent list operations. This code may be ported to Java or C++ and used to perform persistent operations on mutable objects (like lists in Java and Python). Next time, trees! Let the source be with you!

PS Clojure code here.
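To see the persistence property in action, here is a self-contained version of the two Python functions above, with the helpers myEmpty, myHead, myCdr and dec filled in as I assume they look:

```python
def myEmpty(xs): return 1 if not xs else 0
def myHead(xs):  return xs[0]
def myCdr(xs):   return xs[1:]
def dec(n):      return n - 1

def myConcat(xs, ys):
    if myEmpty(xs) == 1:
        return ys
    return [myHead(xs)] + myConcat(myCdr(xs), ys)

def myUpdateElement(xs, n, y):
    if myEmpty(xs) == 1:
        raise ValueError("Can't change an empty list")
    elif n > len(xs) - 1 or n < 0:
        raise ValueError("Index Out Of Range")
    elif n == 0:
        return [y] + myCdr(xs)
    return [myHead(xs)] + myUpdateElement(myCdr(xs), dec(n), y)

a, b = [1, 2], [3, 4]
c = myConcat(a, b)
d = myUpdateElement(c, 0, 9)
# The concatenated and updated lists are new; a, b and c all survive.
print(c, d, a, b)  # [1, 2, 3, 4] [9, 2, 3, 4] [1, 2] [3, 4]
```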

## Saturday, 9 January 2016

### Another Y Combinator Derivation

In this post I'm going to derive the Y combinator. The Y combinator allows an anonymous function to call itself - that is, it allows an anonymous function to recurse. At first glance this seems impossible, since an anonymous function doesn't have a name (so how can it be called from inside itself?). But it works, and the building blocks are very simple. As the language I used Scheme - because of its simplicity - and this derivation is based on a fantastic talk by Jim Weirich (RIP, you were the best!).

Suppose I have the recursive factorial function; it's easy to write if we can give the function a name:

```scheme
(define fact
  (lambda (n)
    (if (zero? n)
        1
        (* n (fact (dec n))))))
```

Dec returns its argument minus one. But I can't have an anonymous factorial function now - without a name it is impossible for the function to call itself:

```scheme
(lambda (n)
  (if (zero? n)
      1
      (* n (??? (dec n)))))
```

But what if we try to define a function which creates the factorial, and then call it with some partial value, which we're going to improve?

```scheme
(define make-fact
  (lambda (partial)
    (lambda (n)
      (if (zero? n)
          1
          (* n (partial (dec n)))))))
```

This is pretty cool, but we have no idea what argument to use to call make-fact!


```scheme
(make-fact ???)
```

Let's try something different: rename it, and call it with some partial value (not defined over the whole domain of the factorial), which we will try to improve step by step. First I'm creating the function error, to be sure that this function is never called:

```scheme
(define error
  (lambda (n)
    (display "This can't be called")))
```

And now the function:

```scheme
(define improver
  (lambda (partial)
    (lambda (n)
      (if (zero? n)
          1
          (* n (partial (dec n)))))))
```

Now let's create some higher-order functions (a higher-order function is a function which returns a function):

```scheme
(define first (improver error))
(define second (improver first))
(define third (improver second))
```

first computes the factorial of 0 but not of 1; second computes the factorial of 1 (and 0) but not of 2; and third... This is not the best way, as can be seen ;-)

To go further we need to recall some notions: refactoring principles (code refactoring is the process of restructuring code without changing its external behavior).

1. Tennent's Correspondence Principle: if we surround an expression with a lambda (and immediately call it), it doesn't change anything. Example:

```scheme
(* x 3) -> ((lambda () (* x 3)))
```

2. Introduce binding: if we wrap a function call in another function whose argument is not bound anywhere in the body of our current call, and call that function with some arbitrary argument, it doesn't change anything. Example:

```scheme
(f (g n)) -> ((lambda (xuv) (f (g n))) 12343456789)
```

3. Function wrapping: if we wrap a function in another and call the inner one with the wrapper function's argument, it doesn't change anything:

```scheme
(lambda (n) (+ n x)) -> (lambda (s) ((lambda (n) (+ n x)) s))
```

4. Inlining: anywhere a function is called by name, we can put the actual function body instead of the name - it doesn't change anything:

```scheme
(define compose
  (lambda (f g)
    (lambda (n) (f (g n)))))

(compose dec inc) -> ((lambda (f g) (lambda (n) (f (g n)))) dec inc)
```

Now let's do some inlining on our *first, second and third*, name the whole thing, and make a few more copies of the calls:

```scheme
(define fy
  (improver (improver (improver (improver (improver (improver
    (improver (improver (improver (improver (improver error))))))))))))
```

Now, by calling fy on zero, one, two, ..., we are able to compute factorials up to ten, recursively and anonymously - but obviously that's, again, not what we want.

Nothing new now: we rewrite fy so that it applies improver to partial directly (we have a single definition now):

```scheme
(define fy
  ((lambda (improver)
     (improver (improver error)))
   (lambda (partial)
     (lambda (n)
       (if (zero? n)
           1
           (* n (partial (dec n))))))))
```

That was nothing new; it can compute the factorial of zero: (fy 0) -> 1. But now let's try this: instead of calling improver on error, call improver on improver! There is no partial now, so we can rename it to improver; and remember that where partial was applied to a number, the function must now apply (improver improver) to the decremented number (because improver expects a function, not a number). So we now have:

```scheme
(define fy
  ((lambda (improver)
     (improver improver))
   (lambda (improver)
     (lambda (n)
       (if (zero? n)
           1
           (* n ((improver improver) (dec n))))))))
```

Now, after removing the definition and changing names:

```scheme
(((lambda (x) (x x))
  (lambda (x)
    (lambda (n)
      (if (zero? n)
          1
          (* n ((lambda (v) ((x x) v)) (dec n)))))))
 7)
```

This computes the factorial! So it proves that the Lambda Calculus is powerful enough to express recursion and perform computation. But the recursion and the factorial are mixed together; we want to split them, to have a Y which can be applied to any recursive function. Let's make a move then! Suppose that we wrap the second (x x), and use Tennent's principle in the third lambda:

```scheme
(((lambda (x) (x x))
  (lambda (x)
    ((lambda ()
       (lambda (n)
         (if (zero? n)
             1
             (* n ((lambda (v) ((x x) v)) (dec n)))))))))
 7)
```

Now we introduce a binding into the "empty" lambda: the name of the variable will be "code", and it is called with the value error:

```scheme
(((lambda (x) (x x))
  (lambda (x)
    ((lambda (code)
       (lambda (n)
         (if (zero? n)
             1
             (* n ((lambda (v) ((x x) v)) (dec n))))))
     error)))
 7)
```

Because it doesn't matter what value we put in place of error (it's never used in the inner function), we can legitimately put:

```scheme
(lambda (v) ((x x) v))
```

So it's now:

```scheme
(((lambda (x) (x x))
  (lambda (x)
    ((lambda (code)
       (lambda (n)
         (if (zero? n)
             1
             (* n ((lambda (v) ((x x) v)) (dec n))))))
     (lambda (v) ((x x) v)))))
 7)
```

Another move: since code in the scope of the third lambda is bound to:

```scheme
(lambda (v) ((x x) v))
```

then we are allowed to make a kind of "reverse inline" and replace this expression in the body with code:

```scheme
(((lambda (x) (x x))
  (lambda (x)
    ((lambda (code)
       (lambda (n)
         (if (zero? n)
             1
             (* n (code (dec n))))))
     (lambda (v) ((x x) v)))))
 7)
```

Renaming "code" to "partial" gives us our original partial factorial definition (still lumped together, though):

```scheme
(((lambda (x) (x x))
  (lambda (x)
    ((lambda (partial)
       (lambda (n)
         (if (zero? n)
             1
             (* n (partial (dec n))))))
     (lambda (v) ((x x) v)))))
 7)
```

Now: 1. Tennent's refactoring on the whole function. 2. Introduce the binding of "code" with the value "error" again:

```scheme
(((lambda (code)
    ((lambda (x) (x x))
     (lambda (x)
       ((lambda (partial)
          (lambda (n)
            (if (zero? n)
                1
                (* n (partial (dec n))))))
        (lambda (v) ((x x) v))))))
  error)
 7)
```

Now, take the function:

```scheme
(lambda (partial)
  (lambda (n)
    (if (zero? n)
        1
        (* n (partial (dec n))))))
```

And substitute it for error:

```scheme
(((lambda (code)
    ((lambda (x) (x x))
     (lambda (x)
       ((lambda (partial)
          (lambda (n)
            (if (zero? n)
                1
                (* n (partial (dec n))))))
        (lambda (v) ((x x) v))))))
  (lambda (partial)
    (lambda (n)
      (if (zero? n)
          1
          (* n (partial (dec n)))))))
 7)
```

And again, since:

```scheme
(lambda (partial)
  (lambda (n)
    (if (zero? n)
        1
        (* n (partial (dec n))))))
```

is now bound to code (in the scope of this function, the variable code is bound to it), we can substitute "code" for this function:

```scheme
(((lambda (code)
    ((lambda (x) (x x))
     (lambda (x)
       (code (lambda (v) ((x x) v))))))
  (lambda (partial)
    (lambda (n)
      (if (zero? n)
          1
          (* n (partial (dec n)))))))
 7)
```

Changing code to improver now:

```scheme
(((lambda (improver)
    ((lambda (x) (x x))
     (lambda (x)
       (improver (lambda (v) ((x x) v))))))
  (lambda (partial)
    (lambda (n)
      (if (zero? n)
          1
          (* n (partial (dec n)))))))
 7)
```

Recall that these functions keep computing the factorial the whole way through! We have a few lines of code dealing with the recursion mechanism and a couple with the factorial; let's tear them apart:

```scheme
(define factorial-improver
  (lambda (partial)
    (lambda (n)
      (if (zero? n)
          1
          (* n (partial (dec n)))))))

(define y
  (lambda (improver)
    ((lambda (x) (x x))
     (lambda (x)
       (improver (lambda (v) ((x x) v)))))))
```

Y is a fixed-point combinator - the applicative-order Y combinator. It computes a fixed point of the improver function, which is the factorial function! We know this because:

```scheme
((y factorial-improver) 7)
```

is just another way to write our factorial-computing functions. So Y takes a "recursive" anonymous function and returns the full-fledged recursive version of it - which is a fixed point of the anonymous function. We can see that as follows. Let's name:

```scheme
(y factorial-improver)

as the factorial:

```scheme
(define fact (y factorial-improver))
```

And we see that:

```scheme
(factorial-improver fact)
```

is still the factorial function (meaning it's a fixed point!).

Now we are going to "tweak" our Y a little bit, to make it look more like the textbook one ;). First, change "improver" to "f":

```scheme
(define y
  (lambda (f)
    ((lambda (x) (x x))
     (lambda (x)
       (f (lambda (v) ((x x) v)))))))
```

And then, because (x x) returns the fully fledged factorial and "f" is our "improver", calling "f" on (x x) doesn't change anything (an improved factorial is still the factorial). Next, I wrap a function around the incriminated (x x):

```scheme
(define y
  (lambda (f)
    ((lambda (x) (f (lambda (v) ((x x) v))))
     (lambda (x) (f (lambda (v) ((x x) v)))))))
```

Notice that both of the last lines look the same! Finally we have a more wiki-like Y combinator, sometimes called the Z combinator. The existence of this function is shocking to me; it must be a very important fact that something as simple as the Lambda Calculus is powerful enough to express computation, recursion, and algorithms! Below is a screenshot of the last few lines of my notes for this post - with the first few digits of the computed factorial of 1550.

Thank you for your patience!
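As a closing sketch, the final combinator ports directly to Python (my own translation, not from the post's notes); the eta-expansion `lambda v: x(x)(v)` plays the same delaying role it does in Scheme:

```python
def y(f):
    """Applicative-order Y (Z) combinator."""
    return (lambda x: f(lambda v: x(x)(v)))(
            lambda x: f(lambda v: x(x)(v)))

# The "improver": given a partial factorial, it returns a better one.
factorial_improver = lambda partial: (
    lambda n: 1 if n == 0 else n * partial(n - 1))

fact = y(factorial_improver)
print(fact(7))  # 5040
```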
