jueves, 10 de noviembre de 2011

GIT and self-signed certificates (GIT over HTTPS)

I've seen some people around the internet having problems with GIT repos over HTTPS and self-signed certificates. The GIT gurus explain that, as GIT checks the validity of the certificates, you have to tell GIT, somehow, to skip this check, thus allowing you to pull/push from these repos.

The usual way to achieve this is by exporting GIT_SSL_NO_VERIFY=true in your environment. This works in a lot of situations and systems, but there's another way to achieve this behavior. In your .git/config, you can create a new section (if you don't have it already) called http, declare a variable sslVerify and set it to false. Like this:

from .git/config

    [http]
    sslVerify = false
This way, this particular working copy of the repo won't ever check for valid certificates and you don't have to modify your environment. This is especially useful for Windows devs using repos with Tortoise GIT or the Visual Studio GIT plugin.
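If you prefer not to edit .git/config by hand, the same setting can be applied from the command line. A quick sketch (run inside the working copy):

```shell
# Writes http.sslVerify = false into this repo's .git/config
git config http.sslVerify false

# Verify the setting took effect
git config http.sslVerify

# Or skip the check for a single command only, without touching any config
git -c http.sslVerify=false fetch
```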

Hope this helps.

lunes, 3 de octubre de 2011

Python article in Hakin9 Magazine

Patrycja Przybylowicz, the fine editor of Hakin9 Magazine, contacted me some days ago to let me know that my article about using encrypted Python modules with import hooks was going out in the October 2011 issue.

Well, that issue is now out and you can see the preview here: http://hakin9.org/hack-apple-1011

I'm pretty happy about this. I've always loved that magazine, and being part of it is a big deal for me.

lunes, 19 de septiembre de 2011

Libodbc++ Mini Tutorial

As some of you know, I'm as obsessed over C++ as I'm crazy about Python. C++ programs (as a good friend told me once) are like art masterpieces: elegant, classy and not everybody can understand them ;).

I've been developing Python software for some time now and I've come across pretty good libraries and modules. One of them is Django, and especially its ORM. Django is a great piece of software that allows you to build awesome web applications faster than saying “I love Perl's Catalyst”. The ORM is also awesome on its own, letting you, amongst other neat things, write connectors that integrate with its codebase pretty easily (sometime I'll write about the iSeries connector I wrote, with introspection and select clauses).

Why did I start this post talking about C++? Well, because I want an ORM for C++. Obviously, I know I'm not going to get anywhere near Django's ORM or Ruby's ActiveRecord, but something can be done. I'm aware of some libraries that already provide some ORM functionality, like LiteSQL (not the same as SQLite, of course), QxORM, ODB, Hiberlite, etc., but I really didn't dig any of them because some only support one RDBMS (e.g. SQLite or PostgreSQL), some can only be used with Qt (ouch!), some require their own unique mapping file format, or use a kind of magic that is simply more like voodoo than anything else.

So, I set my mind to design and develop an ORM library for C++ that will sit on top of ODBC (so we can solve the "any RDBMS" problem) and will make use of libodbc++ to connect to the underlying ODBC implementation (in my case, unixODBC). The name of this ORM will be SeORM, as in SuppaEZ-ORM (expect a GitHub push anytime soon), and its aim is to provide an easy-to-use, easy-to-integrate ORM library that (initially) provides basic Object-to-DB communication (however, don't expect a full-blown ORM, cause it just won't happen).

Although this writing served to announce SeORM, the original motive for this post was to talk a bit about libodbc++. While researching libodbc++, I came across a ton of threads and questions regarding its usage. It seems like a full tutorial is missing and that basic usage must be derived from other ODBC libraries (mostly in C) or from the libodbc++ source code. So, I thought about writing a small tutorial (to be followed by a more complete one later on) on using libodbc++.

Small tutorial on libodbc++

As the only RDBMS I have available is PostgreSQL (do you need something else?), I will base this tutorial on using libodbc++ to communicate with PostgreSQL. If you are using another RDBMS, you can change the PostgreSQL specific parts and suit it up to your tech.

Let's create a DB with a basic table:

template1=# CREATE DATABASE test;
template1=# \c test
You are now connected to database "test".
test=# CREATE TABLE temp (id int, name varchar(100));

So, by now we should have a new DB and a table called temp with two columns (one integer and a varchar one).

You are free now to make some inserts there, and/or create a user to manage your new DB. As this is a small test, I'll leave postgres as the user to connect to the RDBMS.

Now, we are going to need our odbc*.ini files. Let's start with the odbcinst.ini which is the file that holds the locations and definitions for the ODBC drivers.

neurogeek@kafka ~ $ cat /etc/unixODBC/odbcinst.ini



The [ODBC] section is global, and this one tells unixODBC (BTW, I'm using unixODBC) that it should trace all calls, so you can debug ODBC connections. If you don't want to debug your connections or you are setting this up in production, just omit this section (*PLEASE*).

The [PostgreSQL] basically tells unixODBC that you have a driver that you are going to be referencing using that name (PostgreSQL) and that the shared library for that driver is located at /usr/lib/psqlodbcw.so.
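For reference, a minimal odbcinst.ini along those lines would look like this (the Description and TraceFile values are placeholders of my choosing; the driver path is the one mentioned above, adjust it to your system):

```ini
[ODBC]
Trace       = yes
TraceFile   = /tmp/odbc.log

[PostgreSQL]
Description = PostgreSQL ODBC driver
Driver      = /usr/lib/psqlodbcw.so
```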

That's our odbcinst.ini. Now, let's create our DSN.

neurogeek@kafka ~ $ cat /etc/unixODBC/odbc.ini

Here's my odbc.ini for this tutorial. You define a [TEST] section (just a name, it can be anything you want except ODBC, I guess). You set the Driver, Servername, Username, Password, Port and Database parameters (pretty self-explanatory) and you are set. As per ODBC tutorials and articles, you could omit some of these configuration parameters and provide them when creating the connection in code.
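Such an odbc.ini could look like this (a sketch: the server, password and port values here are placeholders, use your own; Driver references the [PostgreSQL] name from odbcinst.ini, and Database is the test DB we created earlier):

```ini
[TEST]
Driver      = PostgreSQL
Servername  = localhost
Username    = postgres
Password    = postgres
Port        = 5432
Database    = test
```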

Now that we have everything set up, at least ODBC-wise, let's start by emerging libodbc++.

neurogeek@kafka ~ $ emerge -av libodbc++

These are the packages that would be merged, in order:
Calculating dependencies... done!

[ebuild   R   ~] dev-db/libodbc++-0.2.5-r1  0 kB [1]

Total: 1 package (1 reinstall), Size of downloads: 0 kB
Portage tree and overlays:
[0] /usr/portage
[1] /usr/local/portage

Once we emerge this library, we are going to have access to libodbc++.so.

Our test C++ program is below. What it does is: use the DriverManager to open a Connection to a given DSN (here you could specify some other parameters, like the password, if you didn't provide it in the odbc.ini config file).

Then you create a query (statement) and execute it, getting a ResultSet that you can iterate over to read your columns.

// Name        : OdbcTest.cpp
// Author      : Jesus Rivero (Neurogeek) <jesus.riveroa@gmail.com,
//               neurogeek@gentoo.org>
// Version     : 0.1
// Copyright   : 2011 Jesus Rivero
// Description : Example on how to use ODBC in C++

#include <iostream>

#include <odbc++/drivermanager.h>
#include <odbc++/connection.h>
#include <odbc++/preparedstatement.h>
#include <odbc++/resultset.h>

using namespace std;
using namespace odbc;

int main(int argc, char **argv)
{
    // Open the connection, specifying the DSN.
    // getConnection is a static member of DriverManager.
    Connection *c = DriverManager::getConnection("DSN=TEST");

    // Create the query
    PreparedStatement *s = c->prepareStatement(
            ODBCXX_STRING_CONST("SELECT id, name FROM temp"));

    // Execute the query
    s->execute();

    // Get the initial ResultSet
    ResultSet *r = s->getResultSet();

    while (r->next())
    {
        // Extract column information
        cout << "Column (Id): " << r->getInt("id") << endl;
        cout << "Column (Name): " << r->getString("name") << endl;
    }

    // Clean everything
    delete r;
    delete s;
    delete c;

    return 0;
}

There are a lot of other things you could do with libodbc++, like accessing Database and Table metadata (yes, like column names and types) and other pretty neat stuff.

I hope this serves you well. As soon as I advance with SeORM, I'll post more stuff about it and libodbc++.

Happy coding!

martes, 30 de agosto de 2011

Google Summer of Code 2011

Once again, this year I participated in the Google Summer of Code as a mentor for Gentoo. The project was called Autodep and its aim was to produce a tool that checks the DEPEND and RDEPEND of ebuilds automatically, featuring things like blocking packages' access to files not in their dependencies.

It turned out quite well, and the student, Alexander Bersenev, worked really hard to finish the project. He is also using one of the clusters he has access to at his university to check the whole portage tree for packages with missing dependencies. Quite cool, isn't it?

You can find the project code Here and the documentation Here

BTW, to create the GuideXML document for this project, I wrote a plugin for Sphinx that produces a formatted GuideXML document from the docs. You can find it in my GitHub.

Django Dia exporter

Today, I adapted an old piece of code I had lying around in my personal GIT. I wrote this code some time ago, when I needed to transform a UML diagram I had designed using the awesome DIA into a Django models file.

All you need to do is take this code and append it to ${DIA_SHARE_FOLDER}/python/codegen.py (where DIA_SHARE_FOLDER is generally /usr/share/dia), add an import re at the beginning of that file, and add this:

dia.register_export("PyDia Code Generation (Django)", "py", DjangoRenderer())

at the very end (together with the other generators).

The code is:

class DjangoRenderer(ObjRenderer) :
    mapp = {"Datetime" : "DateTime"}

    def __init__(self) :
        # Let ObjRenderer set up self.klasses, self.filename, etc.
        ObjRenderer.__init__(self)
        self.processed_kls = set([])

    def end_render(self) :
        re_foreign = re.compile(r"[cC]lass\s*(.*).*")
        field_str = "\t%s = models.%sField(%s) %s\n"
        foreign_str = "\t%s = models.ForeignKey('%s'%s) %s\n"

        f = open(self.filename, "w")
        f.write("from django.db import models %s" % ("\n" * 3))
        for sk in self.klasses.keys() :
            parents = self.klasses[sk].parents + self.klasses[sk].templates
            f.write("class %s (models.Model):\n" % (sk,))
            kls_attributes = dict(self.klasses[sk].attributes)
            attributes = kls_attributes.keys()

            for sa in attributes :
                comments = ""

                attr = kls_attributes[sa]
                # Default value (currently unused)
                value = attr[2] == "" and "None" or attr[2]

                if attr[3]:
                    comments = "#%s" % attr[3].replace('\n', ' ')

                # Split "Type | params" declarations; plain "Type" has no params
                try:
                    ty, parms = [x.strip() for x in attr[0].split("|")]
                except ValueError:
                    ty, parms = (attr[0], "")

                mobj = re_foreign.match(ty)

                if mobj:
                    # "Class SomeClass" becomes a ForeignKey
                    foreign_obj = mobj.groups()[0]
                    f.write(foreign_str % (sa, foreign_obj,
                            parms and ", " + parms, comments))
                else:
                    ty = ty.capitalize()
                    ty = self.mapp.get(ty, ty)
                    f.write(field_str % (sa, ty, parms, comments))

            f.write("\n" * 2)
        f.close()
Now, the only thing to take into consideration is that, when designing in DIA, you have to be a bit more specific in the Type field of your class properties.

For instance, if you want a property to be of type string, you should specify it as:

Char | max_length=200, null=True.

This tells the exporter that it should construct a

models.CharField(max_length=200, null=True)

If you want to specify a ForeignKey, the type should be:
Class SomeClass
or if you have options (like related_name), this should be:
Class SomeClass | related_name=some.
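To make the mapping concrete, here is a small standalone sketch of the type parsing done in end_render() above (parse_field is a hypothetical helper written just for illustration; it is not part of the exporter):

```python
import re

mapp = {"Datetime": "DateTime"}
re_foreign = re.compile(r"[cC]lass\s*(.*).*")

def parse_field(name, spec):
    # Split "Type | params" declarations; plain "Type" has no params
    try:
        ty, parms = [x.strip() for x in spec.split("|")]
    except ValueError:
        ty, parms = spec.strip(), ""

    mobj = re_foreign.match(ty)
    if mobj:
        # "Class SomeClass" becomes a ForeignKey
        opts = ", " + parms if parms else ""
        return "%s = models.ForeignKey('%s'%s)" % (name, mobj.groups()[0], opts)

    # Anything else becomes a models.*Field, with the Datetime special case
    ty = mapp.get(ty.capitalize(), ty.capitalize())
    return "%s = models.%sField(%s)" % (name, ty, parms)

print(parse_field("name", "Char | max_length=200, null=True"))
# -> name = models.CharField(max_length=200, null=True)
print(parse_field("owner", "Class SomeClass | related_name=some"))
# -> owner = models.ForeignKey('SomeClass', related_name=some)
```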

Hope this serves you somehow.