GEPS 010: Relational Backend
Revision as of 21:15, 25 March 2009
This page is for the discussion of a proposed implementation of a SQL backend for GRAMPS.
A proposed implementation is being developed in trunk/src/plugins/Sql.py. You can export most of your GRAMPS data through an SQL export using the Export Assistant. (Currently, the selection options are ignored, and it will output everything.)
SQL stands for "Structured Query Language" and is pronounced "sequel" (it is a joke: as it came after QUEL, it is its sequel). After you export your GRAMPS data into a file such as Untitled_1.sql using the above Exporter, then you can use SQL queries like:
$ sqlite3 Untitled_1.sql
SQLite version 3.5.9
Enter ".help" for instructions
sqlite> .tables
dates       family      names       people      repository
events      media       notes       places      sources
sqlite> .headers on
sqlite> select * from people;
handle|gramps_id|gender|death_ref_index|birth_ref_index|change|marker0|marker1|private
b247d7186567ff472ef|I0000|1|-1|-1|1225135132|-1||0
sqlite> select * from names where surname like "%Smith%";
private|first_name|surname|suffix|title|name_type0|name_type1|prefix|patronymic|group_as|sort_as|display_as|call
0|Test|Smith|||2|||||0|0|
sqlite> .exit
$
The current database in GRAMPS would require that you write some code to do this, and you'd need to know some details about the data.
Currently, GRAMPS uses a BSD database as its internal file format. While this is considerably better than, say, an XML format, the choice of BSDDB has a considerable number of drawbacks. This proposal explores the use of SQL as a replacement. This should allow anything from easy, single-file implementations (e.g., SQLite) to more complex and sophisticated client/server setups (e.g., MySQL).
First, there are a number of facts related to this proposal:
- BSDDB is being removed from the standard distribution of Python (as of Python 2.6)
- SQLITE is being added to the standard Python distribution
- BSDDB is not a relational database, but a hierarchical one
- BSDDB databases do not have schema or data-definitions. BSDDB requires all of the database structure logic to reside in code
- BSDDB is a programmer's API
- SQL is a declarative, independent abstraction layer
- SQL engines can optimize queries (in low-level C), whereas with BSDDB the query logic runs in Python
- SQLite tables of a database reside in a single file
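Two of the facts above (SQLite ships with Python, and a whole database lives in one ordinary file) can be demonstrated with a few lines of standard-library Python; the table layout here is illustrative, not the actual GRAMPS schema:

```python
# Minimal sketch: sqlite3 is in the stdlib (Python 2.5+), and the
# whole database is one ordinary disk file. Hypothetical schema.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "family_tree.db")

conn = sqlite3.connect(path)          # creates the single db file
conn.execute("CREATE TABLE person (handle TEXT PRIMARY KEY, gramps_id TEXT)")
conn.execute("INSERT INTO person VALUES (?, ?)", ("b247d7186567ff472ef", "I0000"))
conn.commit()
conn.close()

# The entire database is just this one file; copy it to share the tree.
print(os.path.exists(path))  # True
```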
Next, are a number of claims that need to be tested:
- An SQLite version of a GRAMPS BSDDB may be 4 times smaller
- An SQLite version of a GRAMPS BSDDB may be faster
- The files may be smaller
- The smaller files may allow more into memory
- More code would reside in C, rather than in Python
- SQL Engines can perform query optimizations
- Enterprise SQL versions of GRAMPS would allow people to create and manage much larger trees
- An SQLite version of GRAMPS might allow people to create larger trees
- Because we move all of the DB logic into SQL, we can focus on making GRAMPS stable with large databases
- SQL code is simpler than the equivalent BSDDB code, because SQL is declarative/abstract and BSDDB is a low-level API
- A full-scale MySQL backend would be a trivial step from SQLite (although maybe harder to set up and maintain; though see Django)
- Easy to allow multiple users in a SQLite database (uses file-locking)
- There is a lot of code that we have written dealing with BSDDB. It would have to all be rewritten in SQL (on the other hand, a lot of code can be deleted, which will make GRAMPS easier to maintain and adapt)
- We will have to develop SQL experts
It's good to see this discussion on GRAMPS; it is actually why I'm thinking of giving it another try, depending on how hard this is to implement. Yes, I know it will be hard, but probably much easier and more productive than starting my own project. I'm a developer myself, and when it came time to evaluate GRAMPS, the lack of a relational DB backend was one of the main reasons I decided to keep looking.

Don't discount MySQL over SQLite. While I haven't tried it out yet, there is an embeddable version of MySQL which might overcome some of SQLite's advantages. If a database abstraction layer is used, both could easily be supported. They both have their advantages and disadvantages.

MySQL advantages:
*far better tools for management and reporting
*a true enterprise-level database capable of handling serious loads
*far more is built into the DB, i.e. auto-incrementing fields, stored procedures and so on (SQLite may not even have triggers, but I can't remember)
*far more extensive user base and support

MySQL disadvantages:
*install size (bloat)
*an actual server to set up, run and maintain
**there are tools that can do this automatically, though, and make it almost invisible for an end user; the embeddable MySQL might also be an option
*may be difficult to manage/share multiple databases; more difficult, but very doable, maybe not even that difficult. It would just take some planning.

SQLite advantages:
*far easier to set up: just start writing to the file! No connection or user accounts.
*smaller install (code) size
*easier for users to manage and share separate DBs
*single file
*good support

SQLite disadvantages:
*while great for what it is, it's not an enterprise-level database
*many "traditional" relational DB features are lacking
*while tools exist, they aren't as fleshed out and solid as the MySQL ones

Personally, I think SQLite makes more sense for genealogical software, but MySQL's tools, and the fact that it's a "real" enterprise-level relational DB, are serious advantages.
-- AaronS
Single-file Cross-platform Database
A database in SQLite is a single disk file. Furthermore, the file format is cross-platform. A database that is created on one machine can be copied and used on a different machine with a different architecture. SQLite databases are portable across 32-bit and 64-bit machines and between big-endian and little-endian architectures.
The SQLite database file format is also stable. All releases of SQLite version 3 can read and write database files created by the very first SQLite 3 release (version 3.0.0), going back to 2004-06-18. This is "backwards compatibility". The developers promise to maintain backwards compatibility of the database file format for all future releases of SQLite 3. "Forwards compatibility" means that older releases of SQLite can also read and write databases created by newer releases. SQLite is usually, but not completely, forwards compatible.
The stability of the SQLite database file format and the fact that the file format is cross-platform combine to make SQLite database files an excellent choice as an Application File Format.
The single disk file of a SQLite DB would be a major selling point of SQLite for genealogy software, since users share and compare DBs all the time. --AaronS
Database Abstraction Layer
I asked this question on StackOverflow.
Pylons took the time to be able to use it.
This is not a db abstraction layer and not even a relational db.
Python has an API to make it easy to move from one SQL-based DB to another called DB-API. Each DB may have multiple different modules available for it. If we settle on this solution then we should do some quick searches to make sure we pick the right modules.
- MySQL: Yes. MySQLdb, used by SQLAlchemy.
- SQLite: Yes. sqlite3 (included in Python 2.5 or greater) and pysqlite, both used by SQLAlchemy.
- BSDDB: No. The DB-API looks to be only for relational dbs.
- ORM (Object Relational Mapper): no
"While all DB-API modules have identical APIs (or very similar; not all backends support all features), if you are writing the SQL yourself, you will probably be writing SQL in a product-specific dialect, so they are not as interchangeable in practice as they are in theory." kquinn
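The shared DB-API shape, and kquinn's caveat about product-specific differences, can be sketched with the stdlib sqlite3 module (the table is illustrative): even the parameter placeholder is dialect-specific, `?` for sqlite3 versus `%s` for MySQLdb, which each module reports via its `paramstyle` attribute.

```python
# DB-API pattern with stdlib sqlite3; hypothetical table. connect(),
# cursor(), execute(), fetchone() are the portable DB-API surface,
# but the placeholder style already differs per module.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE names (surname TEXT, first_name TEXT)")
cur.execute("INSERT INTO names VALUES (?, ?)", ("Smith", "Test"))
conn.commit()

cur.execute("SELECT first_name FROM names WHERE surname LIKE ?", ("%Smith%",))
row = cur.fetchone()
print(sqlite3.paramstyle)  # 'qmark' -- MySQLdb would report 'format'
print(row[0])              # 'Test'
```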
Django also provides DB independence, but is geared towards web deployment:
"Django developed its ORM (and template language) from scratch. While that may have been a pragmatic decision at the time, Python now has SQLAlchemy, a superior database layer that has gained a lot of momentum. Django’s in-house ORM lacks multiple database support, and forces constraints on your database models (e.g. that every database table must have a single, integer primary key). If you choose Django, your project gains a near-inseparable dependency on Django’s ORM and database requirements." marcus cavanaugh 2009
Django's DB abstraction probably isn't a good fit. While powerful, I doubt any projects are using it outside of Django.
It doesn't look viable: limited users and documentation, and it is tied to a web framework.
- MySQL: Yes
- SQLite: Yes
- BSDDB: No.
- ORM (Object Relational Mapper): Yes, but it doesn't force it.
- Multiple database support?: Maybe, need to confirm.
- Viability: Last release was January 24, 2009. They seem to have an established development team and user base. The project appears to be 3 years old.
"SQLAlchemy is designed to operate with a DB-API implementation built for a particular database" source
"SQLAlchemy, widely considered to be the best Python ORM available. SQLAlchemy includes multiple database support and just about any crazy combination of database requirements needed, and it handles ORM very well — yet it also allows you to provide raw SQL as needed." marcus cavanaugh 2009
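A minimal sketch of what a GRAMPS table might look like under SQLAlchemy's declarative ORM; the `Person` class, table, and column names are hypothetical, not the real GRAMPS schema. The only backend-specific part is the engine URL, which is the multiple-database support being praised above:

```python
# Hypothetical Person table under SQLAlchemy declarative mapping.
# Swapping "sqlite:///:memory:" for a mysql:// URL is the only change
# needed to move backends (MySQL would also want String lengths).
from sqlalchemy import Column, Integer, String, create_engine
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)   # surrogate integer key
    handle = Column(String, unique=True)     # GRAMPS handle
    gramps_id = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Person(handle="b247d7186567ff472ef", gramps_id="I0000"))
session.commit()

found = session.query(Person).filter_by(gramps_id="I0000").first()
```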
- MySQL: yes
- SQLite: yes
- BSDDB: no
- ORM (Object Relational Mapper): Yes; this is what SQLObject is all about.
- Multiple database support?: ?
- Viability: Last release was 2008-12-08, with 7 developers on the project. Couldn't find old release dates, but the first was posted 2003-04-09. The link to the wiki is broken, which isn't the best sign, but we've all had those times before...
Pylons took the time to be able to use it.
I suspect that we would have something like SQLite as a default, but allow experts to move to more sophisticated backends.
It is quite powerful, but perhaps more sophisticated than what we need. I think we want to find the right balance between power and dependencies.
What you're looking for here is called a Database Abstraction Layer. They are indeed quite powerful and are exactly what you need; if you're going to bother switching the backend, don't waste your time by not using one. You'll kick yourself later if you don't. Just be careful which one you choose. I know that in PHP every web framework seems to have its own, and I suspect the same is true for Python. Django has its own but allows for the use of others (if that tells you anything), so it might be a place to check for alternatives. While their framework might be for websites, that shouldn't matter for the DB abstraction layer.

What to look for in a DB abstraction layer is which DBs it can use. SQLite and MySQL are musts; you may even find one that can talk to BSDDB, but probably not. Oracle and PostgreSQL are pluses but will probably never be used. Then again, who knows what will happen in 5 or 10 years; maybe Oracle will get fed up with MySQL and release its DB open source, charging for service. Stranger things have happened. Ease of use, readability and outer joins are also important.

Don't worry too much about how complex the SQL queries it lets you create can be, since complex queries through a DB layer tend to be difficult to create, read and predict (i.e. subqueries and the like); usually those queries are far easier to just build as raw SQL. In my experience a DB abstraction layer is good for most of the DB I/O. However, for the complex stuff, a sort of localization object (or even file) with named queries is a good bet. This would work similarly to how different languages are usually supported in projects, with a different object or file per DB. I'd recommend an actual object with a function per query over a file of constants/variables, since some DBs might require a little more manipulation than others. Again, this would only be for the most complex queries. A good rule of thumb: if you have to start writing parts of the query as strings, move it to the DB localization object. This object isn't used for all queries, because you only want to tweak the minimum number of queries across DBs. --AaronS
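The per-backend "DB localization object" suggested above could be sketched like this; the class names, the query, and the schema are all hypothetical:

```python
# Hypothetical sketch of a per-backend query object: one class per
# backend, one method per complex query, so only these hand-written
# queries need tweaking when the backend changes.
class SqliteQueries:
    def persons_with_attribute(self, name):
        # sqlite3 uses '?' placeholders
        return ("SELECT person_handle FROM attribute WHERE name = ?", (name,))

class MysqlQueries:
    def persons_with_attribute(self, name):
        # MySQLdb uses '%s' placeholders instead
        return ("SELECT person_handle FROM attribute WHERE name = %s", (name,))

def get_queries(backend):
    """Pick the query object for the configured backend."""
    return {"sqlite": SqliteQueries, "mysql": MysqlQueries}[backend]()

sql, params = get_queries("sqlite").persons_with_attribute("Occupation")
print(sql)
```

All ordinary I/O would still go through the abstraction layer; only queries too complex for it get an entry here.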
Discussions of BSDDB in Python
BSDDB has had a hard time in Python. Python Developers have been wrestling with trying to keep it stable. Guido finally decided to separate BSDDB from the standard Python Distribution. See discussions:
PEP 3108 marks BSDDB to be removed: http://www.python.org/dev/peps/pep-3108/. Development is not dead, however; it will only be out of sync with the Python release cycle. The home of pybsddb, offering the bsddb3 package, is here: http://www.jcea.es/programacion/pybsddb.htm
A sqlite shelve interface for Python: http://bugs.python.org/issue3783
From http://www.sqlite.org/cvstrac/wiki?p=SqliteCompetitors: SQLite versus Berkeley DB:
Berkeley DB (BDB) is just the data storage layer - it does not support SQL or schemas. In spite of this, BDB is twice the size of SQLite. A comparison between BDB and SQLite is similar to a comparison between assembly language and a dynamic language like Python or Tcl. BDB is probably much faster if you code it carefully. But it is much more difficult to use and considerably less flexible. On the other hand BDB has very fine grained locking (although it's not very well documented), while SQLite currently has only database-level locking. -- fine grain locking is important for enterprise database engines, but much less so for embedded databases. In SQLite, a writer gets a lock, does an update, and releases the lock all in a few milliseconds. Other readers have to wait a few milliseconds to access the database, but is that really ever a serious problem?
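The database-level locking described in that excerpt can be observed directly with the stdlib sqlite3 module; this is a sketch with a throwaway table, using `BEGIN IMMEDIATE` to take the write lock explicitly:

```python
# While one connection holds a write transaction, a second writer is
# refused (it would normally retry for `timeout` seconds); the lock
# is released within milliseconds at commit, as the excerpt says.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lock_demo.db")

writer = sqlite3.connect(path, isolation_level=None)   # autocommit mode
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN IMMEDIATE")                      # take the write lock
writer.execute("INSERT INTO t VALUES (1)")

other = sqlite3.connect(path, timeout=0.1, isolation_level=None)
try:
    other.execute("BEGIN IMMEDIATE")                   # second writer: refused
    locked = False
except sqlite3.OperationalError:                       # "database is locked"
    locked = True

writer.execute("COMMIT")                               # lock released
other.execute("BEGIN IMMEDIATE")                       # now it succeeds
other.execute("COMMIT")
print(locked)  # True
```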
Comparing BSDDB to SQLite
A company that justifies a switch from BSDDB to SQLite: see http://www.tribler.org/DatabaseMigration.
Oracle's description of BSDDB; see http://www.oracle.com/database/docs/Berkeley-DB-v-Relational.pdf. Excerpt:
Berkeley DB Offers APIs, not Query Languages

Berkeley DB was designed for software developers, by software developers. Relational database systems generally provide SQL access to the data that they manage, and usually offer some SQL abstraction, like ODBC or JDBC, for use in applications.
What BSDDB is not:
From previous GRAMPS discussions:
From the GRAMPS archives:
> Now, sometimes we get a request for a major architectural change that we will accept. A good example is the new database backend for the upcoming GRAMPS 2.0. The request came in to support a real database backend so we could support larger databases. We analyzed the request, and felt that it matched the goals of the project and would provide a significant step forward in the usability of the program. The result was a major redesign effort that will soon be released.

I think I and a few others are the ones that impacted this decision. Having an 850,000-person database tends to be deadly to the XML architecture that we were using. I've been the main person to test the integrity of the system with my GEDCOM file importing. When I found that I couldn't import my file without extensive data loss, I came to Don and Alex and we all sought solutions. We found that the XML interface was taking huge amounts of memory, and we looked for database backends that would handle the load. Don and Alex came through with the BSDDB backend, and ever since 1.1.3, I've been happy as a clam with the GRAMPS project, because I'm one step closer to killing Windows. I personally want to do away with it, but I need it for other applications. I've also come to the realization that both Windows and Linux are good, but in their own realms. I don't want this to become a huge flame war about Linux and Windows, so if you have other questions as to why I feel this way, email me.

> So, would we accept a mySQL database backend? There is a good chance we would (depending on the implementation), as long as it did not impact Aunt Martha. We have even architected the backend to support this, since we can see that higher-end databases could provide additional functionality such as versioning and multiuser support.

We could accept MySQL because of this, but I agree with Don. If it negatively impacts the end user, why would we want to proceed with it?
I have a friend that wondered about MySQL interaction, but he can see the impact that BSDDB has had on my database, and he has sided with me as well as the rest of the team. Not to say that this is not a possibility, but we need to remain focused on the tasks at hand.

> So, in summary, the project is going in a direction that seems to meet the needs of our users. If we changed directions, we might or might not be able to reach a larger audience, but numbers are not our goal. We fully support others submitting patches and other contributions, but they will be weighed on how they match the goals of the project (and most of the patches we've received to date do match the goals). If someone wants us to take the project in a different direction, we may or may not be receptive depending on whether the direction matches our goals. However, we will support your efforts if you decide to fork the project. Who knows, maybe a remerge will occur in the future, or a forked project will make us irrelevant.

I agree with Don on this; numbers don't matter as long as the users are happy. Getting things appropriately nailed down and ready for the end user's use is what is paramount. After all, if there were no users, why would we even have a project with which to collaborate in the first place? We are here for the users, especially Aunt Martha, because many people are just moving over to Linux, and having something familiar to them, like a genealogical program, is what matters. Making the transition to Linux is hard, don't get me wrong. But we are making it one step easier by not complicating the user's experience in their move. Like I said before, I'm just a bug finder. I'm not really a Python programmer or anything, but I like to find bugs. Even if that's all I do on this project, I'm rather content. Everyone else that wants to port over to other toolkits and whatnot is free to do so.
But also as an end user that's still a greenie to Linux in general, I can say that this program has helped my move over to Linux that much easier. Even if I have only contributed a little in the way of feedback (mostly from the end-user perspective). -Jason
Alex said:

SQLite might be better or it might not; we haven't tried it. A great factor speaking for BSDDB is that it is supported by a standard Python module, bsddb.

Don said:

This is an important factor here: ease of setup and use. GRAMPS is difficult enough to get installed on some platforms (especially KDE-centric systems). Requiring someone to get an SQL database up and running to try out the program is probably too much effort. What I've discovered is that GRAMPS is one of the first programs that a lot of new users want to get running, usually before they have a lot of Linux experience. So we can't make the barriers to entry too high.
"Requiring someone to get an SQL database up and running to try out the program is probably too much effort." This simply isn't true of SQLite at all. The program would simply write to the DB file: no server setup, no user accounts, no connection settings, just a file name. Users wouldn't even know. It might be true of MySQL, though; the embedded version of MySQL may be similar, but I haven't tried it out. However, I believe it's possible to use scripts and/or code to manage launching and stopping the server. It might be possible to make it seamless for the user, but that would depend on the implementation. --AaronS
Create Object model
Going over src/gen/lib/, create an object model of how GRAMPS uses and manipulates genealogy data.
For this GEP to succeed, it is extremely important that the experienced developers on the devel list agree with the object model.
Select an SQL framework
- finish research and pick a database abstraction layer.
- finish research and pick a database.
- use the framework to set up a model of the database
- generate the tables
- create a dump of the bsddb database into the SQL database
- validate that all things present in bsddb are present in the SQL database
- check validation rules. E.g., handle should be unique; rules must be added so that adding an object with a person-like handle to the family table is impossible at the database layer. These kinds of rules can be enforced technically (a primary object table keyed on handle) or with explicit rules.
- best would be a framework that, based on the model, can generate an admin module to browse the database; see e.g. the admin module in Django.
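The "primary object table keyed on handle" idea from the checklist above can be sketched in SQL; the schema here is illustrative only. Every object registers its handle and type once, so a person's handle can never be reused in the family table:

```python
# Hypothetical schema enforcing handle uniqueness at the database
# layer: all handles live in one primary_object table, and the
# per-type tables reference it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE primary_object (
    handle   TEXT PRIMARY KEY,        -- handles unique across all tables
    obj_type TEXT NOT NULL            -- 'person', 'family', ...
);
CREATE TABLE person (
    handle TEXT PRIMARY KEY REFERENCES primary_object(handle)
);
CREATE TABLE family (
    handle TEXT PRIMARY KEY REFERENCES primary_object(handle)
);
""")

conn.execute("INSERT INTO primary_object VALUES ('b247', 'person')")
conn.execute("INSERT INTO person VALUES ('b247')")

# Reusing the person's handle for a family fails at the database layer:
try:
    conn.execute("INSERT INTO primary_object VALUES ('b247', 'family')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```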
New db backend for GRAMPS
- write an implementation of src/gen/db/base.py to interface the DB abstraction layer with the rest of GRAMPS. GRAMPS 3.x only has one implementation, src/gen/db/dbdir.py, but in branch22 a GEDCOM and a GRAMPS XML implementation can be found (these have been deprecated).
- once written, this can be added as an experimental backend to GRAMPS
- the Family Tree manager needs to list the family tree type (bsddb, sqlite); on creation of a new family tree the user must choose the backend.
- the user can import .gramps/GEDCOM files just as is done with the bsddb backend, once the family tree is set up.
- it will be very important to use slots in src/gen/lib to make this work. Obtaining a person via get_person_from_handle should only hit the person table; only when the calling method needs attributes should the attribute table be hit. This requires attributes that are not defined until the moment they are accessed. It also means that the gen/lib objects for SQL need to be aware of the database, as they need to know where to obtain these values. This looks like a huge amount of work to me, but definitely doable. Just rewriting gen/lib for an SQL data model might be easier, but that means rewriting the core of GRAMPS...
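The deferred loading described in that item could look something like the following sketch; the `LazyPerson` class, the tables, and the column names are hypothetical, not the real gen/lib objects:

```python
# Hypothetical sketch: core fields are loaded eagerly from the person
# table; the attribute table is only queried the first time
# .attributes is accessed, as the proposal above requires.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (handle TEXT PRIMARY KEY, gramps_id TEXT);
CREATE TABLE attribute (person_handle TEXT, name TEXT, value TEXT);
INSERT INTO person VALUES ('b247', 'I0000');
INSERT INTO attribute VALUES ('b247', 'Occupation', 'Farmer');
""")

class LazyPerson:
    # slots keep the object small and make the deferred fields explicit
    __slots__ = ("handle", "gramps_id", "_db", "_attrs")

    def __init__(self, db, handle):
        self._db = db
        self.handle = handle
        # eager load: one hit on the person table only
        self.gramps_id = db.execute(
            "SELECT gramps_id FROM person WHERE handle = ?", (handle,)
        ).fetchone()[0]
        self._attrs = None                      # not loaded yet

    @property
    def attributes(self):
        if self._attrs is None:                 # first access: hit attribute table
            self._attrs = dict(self._db.execute(
                "SELECT name, value FROM attribute WHERE person_handle = ?",
                (self.handle,)))
        return self._attrs

p = LazyPerson(conn, "b247")                    # person table hit only
print(p.attributes["Occupation"])               # attribute table hit now
```

Note the object keeps a reference to the database, exactly the dependency the proposal warns about.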
Once an SQL backend is stable, base.py can be extended to offer extra functionality, or to better optimize for SQL. E.g., in SQL one would probably have an attribute table. To know which persons have a specific attribute, SQL would select from the attribute table and then look up the people. In BSDDB, however, it means looping over all persons and checking whether the attribute is present in each person's attribute sub-table.
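The two lookup strategies just described can be contrasted on toy data (the table and the in-memory person dict are illustrative): the SQL way asks the attribute table directly, while the bsddb-style way loops over every person in Python.

```python
# Contrast of the two strategies above on hypothetical data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attribute (person_handle TEXT, name TEXT);
INSERT INTO attribute VALUES ('h1', 'Occupation');
INSERT INTO attribute VALUES ('h3', 'Occupation');
""")

# SQL way: one declarative query, optimized inside the engine
sql_hits = {row[0] for row in conn.execute(
    "SELECT person_handle FROM attribute WHERE name = ?", ("Occupation",))}

# bsddb-style way: iterate all persons in Python, check each sub-table
persons = {"h1": ["Occupation"], "h2": [], "h3": ["Occupation"]}
loop_hits = {h for h, attrs in persons.items() if "Occupation" in attrs}

print(sql_hits == loop_hits)  # True
```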
The above clearly indicates that how one goes about this in the two backends is very different. The BSDDB way will work in SQL, though (as the get_person method works, and speed should be comparable to BSDDB if the deferred loading of values via slots described above is implemented). Nevertheless, a clear mechanism to optimize for SQL is needed. Continuing the above example, see _HasAttributeBase.py.
For SQL, one would use the prepare method, obtain all matching people in a list, then return True or False depending on whether the person is in this list. As db is passed in, db can have a support_sql attribute, and code can be written depending on this setting. This does not look very ideal, though.