tag:blogger.com,1999:blog-89872057995930247682024-03-13T12:55:17.965-07:00Neurogeek's BlogNeurogeekhttp://www.blogger.com/profile/09314343109652247104noreply@blogger.comBlogger5125tag:blogger.com,1999:blog-8987205799593024768.post-73650712831034406442011-11-10T15:00:00.000-08:002011-11-10T15:00:09.596-08:00GIT and self-signed certificates (GIT over HTTPS)I've seen some people on the internet having problems with GIT repos over HTTPS and self-signed certificates. The GIT gurus explain that, as GIT checks the validity of certificates, you have to tell GIT, somehow, to ignore this check, thus allowing you to pull/push from these repos.<br />
<br />
The usual way to achieve this is by exporting GIT_SSL_NO_VERIFY=true. This works in a lot of situations and on many systems, but there's another way to achieve this behavior. In your .git/config, you can create a new section (if you don't have it already) called <b>http</b>, declare a variable <b>sslVerify</b> and set it to <b>false</b>. Like this:<br />
<br />
from .git/config<br />
<br />
<blockquote class="tr_bq">[http]<br />
sslVerify = false</blockquote>This way, this particular working copy of the repo won't ever check for valid certificates, and you don't have to modify your environment. This is especially useful for Windows devs using repos with Tortoise GIT or the Visual Studio GIT plugin.<br />
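If you'd rather not edit the file by hand, git can write the same setting for you. A quick sketch (the /tmp/demo-repo path is just an example; run the commands inside your own working copy):

```shell
# Scratch repo for the demo (hypothetical path)
git init -q /tmp/demo-repo
cd /tmp/demo-repo

# Same effect as editing .git/config by hand:
git config http.sslVerify false

# One-off alternative that touches no config at all:
#   GIT_SSL_NO_VERIFY=true git pull

# Verify what this working copy ended up with:
git config --get http.sslVerify   # prints: false
```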
<br />
Hope this helps.Neurogeekhttp://www.blogger.com/profile/09314343109652247104noreply@blogger.com0tag:blogger.com,1999:blog-8987205799593024768.post-60029902972653874692011-10-03T07:15:00.000-07:002011-10-03T07:15:53.186-07:00Python article in Hakin9 MagazinePatrycja Przybylowicz, the fine editor of the Hakin9 Magazine, contacted me some days ago letting me know that my article about using Encrypted Python modules with Import Hooks was going out in the October 2011 issue.<br />
<br />
Well, that issue is now out and you can see the preview here: http://hakin9.org/hack-apple-1011<br />
<br />
I'm pretty happy about this. I've always loved that magazine, and being part of it is a great deal for me.Neurogeekhttp://www.blogger.com/profile/09314343109652247104noreply@blogger.com0tag:blogger.com,1999:blog-8987205799593024768.post-9640694394880068822011-09-19T06:34:00.000-07:002012-12-03T19:53:18.471-08:00Libodbc++ Mini TutorialAs some of you know, I'm as obsessed with C++ as I am crazy about Python. C++ programs (as a good friend once told me) are like art masterpieces: elegant, classy, and not everybody can understand them ;).<br />
<div align="left">
<br />
<br /></div>
I've been developing Python software for some time now and I've come across pretty good libraries and modules. One of them is Django, and especially its ORM. Django is a great piece of software that allows you to build awesome web applications faster than saying “I love Perl's Catalyst”. The ORM is also awesome on its own, letting you, amongst other neat things, write connectors that integrate with its codebase pretty easily (sometime I'll write about the iSeries connector I wrote, with introspection and select clauses). <br />
<div align="left">
<br />
<br /></div>
Why did I start this post talking about C++? Well, the reason is that I want an ORM for C++. Obviously, I know I'm not going to get anywhere near Django's ORM or Ruby's ActiveRecord, but something can be done. I'm aware of some libraries that already provide some ORM functionality, like LiteSQL (not the same as SQLite, of course), QxORM, ODB, Hiberlite, etc., but I really didn't dig any of them because some only support one RDBMS (e.g. SQLite or PostgreSQL), some can only be used with Qt (ouch!), some require their own type of mapping file specification, or use a kind of magic that is simply more like voodoo than anything else.<br />
<div align="left">
<br />
<br /></div>
So, I set my mind to design and develop an ORM library for C++ that will sit on top of ODBC (so we can resolve the "any RDBMS" problem) and will make use of libodbc++ to connect to the underlying ODBC implementation (in my case, unixODBC). The name of this ORM will be SeORM, as in SuppaEZ-ORM (expect a github push anytime soon), and its aim is to provide an easy-to-use and easy-to-integrate ORM library so you can (initially) have basic Object-to-DB communication (however, don't expect a full-blown ORM, cause it just won't happen).<br />
<div align="left">
<br />
<br /></div>
Although this writing served to announce it, the original motive for this post was to talk a bit about libodbc++ rather than SeORM. While researching libodbc++, I came across a ton of threads and questions regarding its usage. It seems like a full tutorial is missing and that basic usage must be derived from other ODBC libraries (mostly in C) or from the libodbc++ source code. So, I thought about writing a small tutorial (that will be followed by a more complete one later on) on using libodbc++.<br />
<div align="left">
<br />
<br /></div>
<b><span style="font-size: large;">Small tutorial on libodbc++</span></b><br />
<div align="left">
<br />
<br /></div>
As the only RDBMS I have available is PostgreSQL (do you need something else?), I will base this tutorial on using libodbc++ to communicate with PostgreSQL. If you are using another RDBMS, you can change the PostgreSQL-specific parts to suit your tech.<br />
<div align="left">
<br /></div>
Let's create a DB with a basic table:<br />
<div align="left">
<br />
<br /></div>
<pre>template1=# CREATE DATABASE test;
CREATE DATABASE
template1=# \c test
You are now connected to database "test".
test=# CREATE TABLE temp (id int, name varchar(100));
CREATE TABLE
</pre>
<div align="left">
</div>
<br />
So, by now we should have a new DB and a table called temp with two columns (an integer one and a varchar one).<br />
<div align="left">
<br />
<br /></div>
You are now free to make some inserts there and/or create a user to manage your new DB. As this is a small test, I'll leave postgres as the user to connect to the RDBMS.<br />
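For example, a couple of rows (made-up sample data, so the SELECT later on returns something) can be added from the same psql session:

```sql
INSERT INTO temp (id, name) VALUES (1, 'foo');
INSERT INTO temp (id, name) VALUES (2, 'bar');
```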
<div align="left">
<br />
<br /></div>
Now, we are going to need our odbc*.ini files. Let's start with odbcinst.ini, which is the file that holds the locations and definitions of the ODBC drivers.<br />
<div align="left">
<br />
<br /></div>
<span style="color: lime;">neurogeek@kafka ~ $</span> cat /etc/unixODBC/odbcinst.ini <br />
<br />
<pre>[ODBC]
Trace=yes
TraceFile=/tmp/odbc_log.txt
[PostgreSQL]
driver=/usr/lib/psqlodbcw.so
setup=/usr/lib/psqlodbcw.so
</pre>
<br />
The <b>[ODBC]</b> section is global, and this one tells unixODBC (BTW, I'm using unixODBC) that it should trace all calls, so you can debug ODBC connections. If you don't want to debug your connections, or you are setting this up in production, just omit this section (*PLEASE*).<br />
<div align="left">
<br /></div>
The <b>[PostgreSQL]</b> section basically tells unixODBC that you have a driver that you are going to reference using that name (PostgreSQL) and that the shared library for that driver is located at /usr/lib/psqlodbcw.so.<br />
<br />
That's our odbcinst.ini. Now, let's create our DSN.<br />
<br />
<span style="color: lime;">neurogeek@kafka ~ $</span> cat /etc/unixODBC/odbc.ini <br />
<div align="left">
</div>
<pre>[TEST]
driver=PostgreSQL
Servername=localhost
Username=postgres
Password=********
Port=5432
Database=test
</pre>
<br />
Here's my odbc.ini for this tutorial. You define a <b>[TEST]</b> section (just a name; it can be anything you want except ODBC, I guess). You write the driver, Servername, Username, Password, Port and Database information (pretty self-explanatory) and you are set. As per ODBC tutorials and articles, you could omit some of these configuration parameters and provide them when creating the connection in the code.<br />
<div align="left">
<br />
<br /></div>
Now that we have everything set up, at least ODBC-wise, let's start by emerging libodbc++.<br />
<div align="left">
<br />
<br /></div>
<span style="color: lime;">neurogeek@kafka ~ $</span> emerge -av libodbc++<br />
<div align="left">
<br />
<br /></div>
<pre>These are the packages that would be merged, in order:

Calculating dependencies... done!

[ebuild   R   ~] dev-db/libodbc++-0.2.5-r1  0 kB [1]

Total: 1 package (1 reinstall), Size of downloads: 0 kB
Portage tree and overlays:
 [0] /usr/portage
 [1] /usr/local/portage
</pre>
<div align="left">
<br />
<br /></div>
Once we emerge this library, we are going to have access to libodbc++.so.<br />
<div align="left">
<br />
<br /></div>
Our test C++ program is below. What it does is open a Connection to a given DSN through the DriverManager (here you could specify some other parameters, like the password, if you didn't provide it in the odbc.ini config file).<br />
<div align="left">
<br />
<br /></div>
Then you create a query (statement) and execute it, getting a ResultSet you can iterate to get the values of your columns.<br />
<br />
<br />
<pre>//=================================================================
// Name        : OdbcTest.cpp
// Author      : Jesus Rivero (Neurogeek)
//               &lt;jesus.riveroa@gmail.com, neurogeek@gentoo.org&gt;
// Version     : 0.1
// Copyright   : 2011 Jesus Rivero
// Description : Example on how to use ODBC in C++
//=================================================================

#include &lt;iostream&gt;

#include &lt;odbc++/resultset.h&gt;
#include &lt;odbc++/preparedstatement.h&gt;
#include &lt;odbc++/drivermanager.h&gt;
#include &lt;odbc++/databasemetadata.h&gt;

using namespace std;
using namespace odbc;

int main(int argc, char **argv)
{
    // Open the connection, specifying the DSN we defined in odbc.ini.
    // getConnection is static, so no DriverManager instance is needed.
    Connection *c = DriverManager::getConnection("DSN=TEST");

    // Create the query
    PreparedStatement *s = c->prepareStatement(
        ODBCXX_STRING_CONST("SELECT id, name FROM temp"));

    // Execute the query
    s->execute();

    // Get the initial ResultSet
    ResultSet *r = s->getResultSet();

    while (r->next())
    {
        // Extract column values
        cout &lt;&lt; "Column (Id): " &lt;&lt; r->getInt("id") &lt;&lt; endl;
        cout &lt;&lt; "Column (Name): " &lt;&lt; r->getString("name") &lt;&lt; endl;
    }

    // Clean everything up
    delete r;
    delete s;
    delete c;

    return 0;
}
</pre>
<br />
<br />
<div align="left">
<br /></div>
There are a lot of other things you can do with libodbc++, like accessing <i>Database</i> and <i>Table</i> metadata (yes, like column names and types) and other pretty neat stuff.<br />
<div align="left">
<br />
<br /></div>
I hope this serves you well. As soon as I advance with SeORM, I'll post more stuff about it and libodbc++.<br />
<div align="left">
<br />
<br /></div>
Happy coding!Neurogeekhttp://www.blogger.com/profile/09314343109652247104noreply@blogger.com5tag:blogger.com,1999:blog-8987205799593024768.post-80459069488844672682011-08-30T14:29:00.000-07:002011-08-31T05:23:00.850-07:00Google Summer of Code 2011Once again, this year I participated in the Google Summer of Code as a mentor for Gentoo. The project was called Autodep and its aim was to produce a tool to check the DEPEND and RDEPEND of ebuilds automatically, featuring things like blocking packages from accessing non-dependency files. <br />
<br />
It turned out quite well and the student, Alexander Bersenev, worked really hard to finish the project. He is also using one of the clusters he has access to at his university to check the whole portage tree for packages with missing dependencies. Quite cool, isn't it?<br />
<br />
You can find the project code <a href="http://git.overlays.gentoo.org/gitweb/?p=proj/autodep.git;a=summary">here</a> and the documentation <a href="http://www.blogger.com/guidexml/index.xml">here</a>.<br />
<br />
BTW, to create the GuideXML document for this project, I wrote a plugin for Sphinx that generates a formatted GuideXML document from the docs. You can find it in <a href="https://github.com/neurogeek/sphinx_guidexml">my GitHub</a>.Neurogeekhttp://www.blogger.com/profile/09314343109652247104noreply@blogger.com0tag:blogger.com,1999:blog-8987205799593024768.post-74750000500022471422011-08-30T11:03:00.000-07:002011-08-30T11:03:36.181-07:00Django Dia exporterToday, I adapted an old piece of code I had lying around in my personal GIT. I wrote this code some time ago when I needed to transform a UML diagram I designed, using the awesome <a href="http://live.gnome.org/Dia">DIA</a>, into a Django models file.<br />
<br />
All you need to do is take this code and put it in ${DIA_SHARE_FOLDER}/python/codegen.py (where DIA_SHARE_FOLDER is generally /usr/share/dia), add an <b>import re</b> at the beginning of that file, and add this:<br />
<br />
<pre>dia.register_export ("PyDia Code Generation (Django)",
    "py", DjangoRenderer())
</pre><br />
at the very end (together with the other generators).<br />
<br />
The code is:<br />
<br />
<pre>class DjangoRenderer(ObjRenderer):
    mapp = {"Datetime": "DateTime"}

    def __init__(self):
        ObjRenderer.__init__(self)
        self.processed_kls = set([])

    def end_render(self):
        re_foreign = re.compile(r"[cC]lass\s*(.*).*")
        field_str = "\t%s = models.%sField(%s) %s\n"
        foreign_str = "\t%s = models.ForeignKey('%s'%s) %s\n"

        f = open(self.filename, "w")
        f.write("from django.db import models %s" % ("\n" * 3))

        for sk in self.klasses.keys():
            parents = self.klasses[sk].parents + self.klasses[sk].templates
            f.write("class %s (models.Model):\n" % (sk,))

            kls_attributes = dict(self.klasses[sk].attributes)
            attributes = kls_attributes.keys()

            if len(attributes) == 0:
                f.write("\tpass\n\n")
            else:
                for sa in attributes:
                    attr = kls_attributes[sa]

                    # Keep any attribute comment as a trailing # comment
                    comments = ""
                    if attr[3]:
                        comments = "#%s" % attr[3].replace('\n', ' ')

                    # Split "Type | field options" declarations
                    try:
                        ty, parms = [x.strip() for x in attr[0].split("|")]
                    except ValueError:
                        ty, parms = (attr[0], "")

                    mobj = re_foreign.match(ty)
                    if mobj:
                        foreign_obj = mobj.groups()[0]
                        f.write(foreign_str % (sa, foreign_obj,
                                               parms, comments))
                    else:
                        ty = ty.capitalize()
                        ty = self.mapp.get(ty, ty)
                        f.write(field_str % (sa, ty, parms, comments))

                f.write("\n" * 2)

        f.close()
        ObjRenderer.end_render(self)
</pre><br />
Now, the only thing to take into consideration is that, when designing in DIA, you have to be a bit more specific in the Type field of your class properties.<br />
<br />
For instance, if you want a property to be of type string, you should specify it as:<br />
<br />
<pre>Char | max_length=200, null=True
</pre><br />
This tells the exporter that it should construct a<br />
<br />
<pre>models.CharField(max_length=200, null=True)
</pre><br />
If you want to specify a ForeignKey, the type should be:<br />
<pre>Class SomeClass
</pre> or if you have options (like related_name), this should be:<br />
<pre>Class SomeClass | related_name=some
</pre><br />
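To make the convention concrete, here is a tiny standalone sketch of how a Type string like the ones above turns into a Django field declaration. The `dia_type_to_field` helper is hypothetical (not part of the plugin); it just mirrors the split-on-"|" and "Class ..." matching that the exporter does:

```python
import re

# Matches the "Class SomeClass" convention used for ForeignKeys
re_foreign = re.compile(r"[cC]lass\s*(\S+)")

def dia_type_to_field(name, type_str):
    """Turn a DIA attribute Type string into a Django field declaration."""
    # Split "Type | field options"; options are optional
    try:
        ty, parms = [x.strip() for x in type_str.split("|")]
    except ValueError:
        ty, parms = type_str.strip(), ""

    mobj = re_foreign.match(ty)
    if mobj:
        # "Class Author | related_name=books" becomes a ForeignKey
        if parms:
            parms = ", " + parms
        return "%s = models.ForeignKey('%s'%s)" % (name, mobj.group(1), parms)

    # "Char | max_length=200, null=True" becomes a CharField
    return "%s = models.%sField(%s)" % (name, ty.capitalize(), parms)

print(dia_type_to_field("name", "Char | max_length=200, null=True"))
# -> name = models.CharField(max_length=200, null=True)
print(dia_type_to_field("author", "Class Author | related_name=books"))
# -> author = models.ForeignKey('Author', related_name=books)
```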
Hope this serves you somehow.Neurogeekhttp://www.blogger.com/profile/09314343109652247104noreply@blogger.com1