- Modular Architecture
I know that many people have put this on their wish list and I am sure that many people would probably blame me for some of the current architecture woes. Honestly, I have an excuse: I am trying to evolve the plug-in system into something more usable. As Eric Herman can testify, we actually had an alternative plug-in architecture under development as a small skunk-works project, which in my opinion was superior to what we have today, but we permanently shelved it after the current plug-in system appeared in the tree.
A good modular architecture would allow parts to be plugged together to make a whole, and would allow infinite extensibility and predictable degradation of services should modules be removed. Removing the ACL module would predictably remove all ACL checking. Multiple ACL modules would extend capabilities and make checking more fine-grained as their capabilities merge. In another scenario, we could have a module which acts as a filter to another module - they combine at run time. For example, a filter module could combine with a storage engine module to compress or encrypt blobs or specific data columns - imagine an environment which needs to store credit card details or other personal identifiers and requires the persisted data to be encrypted transparently. There should not be any limits imposed upon the quantity and combination of modules used to create plug-in classes, except that obviously (and detectably) bogus combinations should emit errors safely and predictably.
Example of syntax:
CREATE LIBRARY myutillib EXTERNAL NAME 'myutils.so';
CREATE LIBRARY mystorelib EXTERNAL NAME 'mystore.so';
CREATE STORAGE ENGINE mystore1 USING LIBRARY mystorelib EXTERNAL NAME 'mystore_engine';
CREATE STORAGE ENGINE bar USING CHAIN OF
LIBRARY myutillib EXTERNAL NAME 'bzipfield',
LIBRARY myutillib EXTERNAL NAME 'cryptfield',
LIBRARY mystorelib EXTERNAL NAME 'mystore_engine';
-- create a table, the column `secretblob` is compressed with bzip
-- before being encrypted with AES using a shared secret (stored by mysql table def)
-- the ssn field is encrypted with triple-DES using a secret stored in a variable secretvar
CREATE TABLE foouser (
    id INT PRIMARY KEY,
    secretblob BLOB,
    ssn CHAR(11)
) ENGINE = bar;
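To make the chaining idea concrete, here is a minimal sketch in Python - all names are invented, and zlib plus a toy XOR stand in for the hypothetical `bzipfield`/`cryptfield` modules - of how filter modules and a storage engine module could compose at run time:

```python
import zlib

class Module:
    """Base interface every pluggable module would implement (hypothetical)."""
    def write(self, data: bytes) -> bytes: ...
    def read(self, data: bytes) -> bytes: ...

class ZlibFilter(Module):
    # stand-in for the hypothetical 'bzipfield' compression filter
    def write(self, data): return zlib.compress(data)
    def read(self, data): return zlib.decompress(data)

class XorFilter(Module):
    # toy stand-in for 'cryptfield'; a real module would use AES or 3DES
    def __init__(self, key): self.key = key
    def write(self, data): return bytes(b ^ self.key for b in data)
    read = write  # XOR is its own inverse

class MemoryEngine(Module):
    # stand-in storage engine: write() persists, read() fetches
    def __init__(self): self.store = None
    def write(self, data):
        self.store = data
        return data
    def read(self, _):
        return self.store

def chain(*modules):
    """Compose modules the way CREATE ... USING CHAIN OF would."""
    def persist(data):
        for m in modules:           # filters first, engine last
            data = m.write(data)
        return data
    def fetch():
        data = modules[-1].read(b"")
        for m in reversed(modules[:-1]):  # undo filters in reverse order
            data = m.read(data)
        return data
    return persist, fetch
```

The point is that the engine never knows its input was compressed or encrypted; the chain is assembled from declarations, not hard-coded.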
- libmysys as a separate project
This is an old idea but one which I think holds significant value and would ease maintenance of the server. I believe we would be well served by having libmysys and other non-core common components as a separate project outside of the mysqld server code. Then all the different versions of mysqld which we develop could use a common foundation and abstraction layer. The abstraction layer should not care which version of mysqld is being built on top of it. Both shared and static libraries, in debug and non-debug builds, could be made available, with the simple caveat that a debug build of mysqld should be built on a debug build of libmysys. However, I think that a non-debug build of mysqld should be made to work on top of a debug build of libmysys. As to what components should be moved into the libmysys project? Basically, the contents of the regex/ directory and similar, with their relevant headers from include/, should be transplanted.
- New/modular Parser
There are simple things which can be done to make the parser better and more customizable. As part of this, multiple parsers should be supported. "Why multiple parsers?" readers may ask... Simply because only a limited subset of the possible accepted syntax is used the overwhelming majority of the time. I would imagine that the parser could be split up into:
- Data access and manipulation - SELECT, INSERT, UPDATE and DELETE
- Privilege manipulation - GRANT and REVOKE
- Schema manipulation - CREATE, ALTER and DROP
- Stored procedure - invoked by CALL
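As a sketch of what such a split could look like, here is a tiny dispatcher (all names hypothetical) that routes a statement to a sub-parser based on its leading keyword, so the common DML path never has to carry the grammar for the rest:

```python
# Hypothetical dispatch table mapping a statement's first keyword to the
# sub-parser responsible for its statement class.
SUBPARSERS = {
    "SELECT": "dml", "INSERT": "dml", "UPDATE": "dml", "DELETE": "dml",
    "GRANT": "privilege", "REVOKE": "privilege",
    "CREATE": "schema", "ALTER": "schema", "DROP": "schema",
    "CALL": "stored_procedure",
}

def route(statement: str) -> str:
    """Pick a sub-parser from the first keyword; unknown verbs fail fast."""
    verb = statement.lstrip().split(None, 1)[0].upper()
    try:
        return SUBPARSERS[verb]
    except KeyError:
        raise SyntaxError(f"no parser registered for {verb!r}")
```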
- Unit tests for server components
I like unit tests. When done correctly, they should test components for proper operation within specifications and for graceful failure outside of them. It should be possible to unit-test many significant parts of the server. Full integration testing (à la
mysql-test-run) should not be used as a substitute for the lack of unit testing. Components which in my opinion should be subject to unit testing include each individual storage engine (within some form of storage engine test harness), the parser (testing that it generates the correct trees), the ACL subsystem, the optimizer, each Item class and the spatial functions, logging and replication, etc. And let's not forget unit tests for the whole of the aforementioned libmysys.
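A storage-engine test harness of the kind mentioned above might boil down to one contract check run against every engine. A toy sketch, with an invented engine API, of testing both in-spec operation and graceful failure:

```python
# Sketch of a storage-engine test harness: every engine is exercised
# through the same contract checks. The engine API here is hypothetical.
class DictEngine:
    """Trivial in-memory engine used only to demonstrate the harness."""
    def __init__(self): self.rows = {}
    def insert(self, key, row): self.rows[key] = row
    def fetch(self, key): return self.rows.get(key)
    def delete(self, key): self.rows.pop(key, None)

def check_engine_contract(engine):
    """Proper operation within spec, graceful failure outside of it."""
    engine.insert(1, ("alice",))
    assert engine.fetch(1) == ("alice",)   # round-trip works
    assert engine.fetch(99) is None        # a miss fails gracefully
    engine.delete(1)
    assert engine.fetch(1) is None         # delete is effective
    engine.delete(99)                      # deleting a miss must not raise
    return True
```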
- Aggregate Stored Functions and External Stored Procedures
This is a small detail but it would be nice to have aggregates: it completes the stored procedure implementation so that any old-world UDF can now be defined under the newer stored procedure code. This should not be too complicated to do; the complex part is how to declare an aggregate function in a way which makes sense (since the SQL:2003 PSM spec does not encompass declaring aggregate functions). It would also be nice to have a uniform, extensible external stored procedure implementation. Eric Herman and I actually have a skunk-works project for this - the basic framework already works - which we pursue when we have time, which is unfortunately not very often nowadays. We're trying to develop the plug-in components for handling Java and Perl external stored procedures. This skunk-works project is the descendant of the earlier plug-in skunk-works project we had, except with all the plug-in management stuff and the elegant unit-test suite removed. The SQL syntax conforms to the SQL:2003 specification, which is similarly closely adhered to by IBM's DB2.
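For illustration, the old-world aggregate UDF calling sequence is roughly a clear/add/value protocol (MySQL's xxx_clear, xxx_add, xxx functions); here is a minimal sketch of that shape, with the class and driver names invented:

```python
# Sketch of the clear/add/value protocol that old aggregate UDFs follow;
# the executor calls clear() per group, add() per row, then value().
class AvgAggregate:
    def clear(self):
        self.total, self.count = 0.0, 0
    def add(self, value):
        if value is not None:          # SQL aggregates skip NULLs
            self.total += value
            self.count += 1
    def value(self):
        return self.total / self.count if self.count else None

def run_aggregate(agg, groups):
    """Drive the aggregate once per group, as the executor would."""
    results = []
    for rows in groups:
        agg.clear()
        for v in rows:
            agg.add(v)
        results.append(agg.value())
    return results
```

Declaring this shape in SQL-level syntax is exactly the part the SQL:2003 PSM spec leaves open.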
Yeah, I know this is supposed to be just five items, but there are a few others which I believe need to be considered:
- A new Recursive Descent parser
Yeah yeah yeah... broken record... etc. The SQL syntax lends itself to recursive descent parsing much more easily than to Yacc-style parsers. Whether it be hand-written or generated by some tool similar to PCCTS is not too important. If a tool such as PCCTS is used, the generated code could then be maintained by hand if desired, using the tool only as an initial 'bootstrap'.
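As a flavor of how directly a grammar maps onto recursive descent - one function per grammar rule - here is a minimal hand-written parser for arithmetic expressions (not MySQL code, just an illustration of the technique):

```python
import re

# Minimal recursive-descent parser: each grammar rule becomes a function
# that consumes tokens from the front of the list.
TOKEN = re.compile(r"\s*(\d+|[()+*\-/])")

def tokenize(src):
    return TOKEN.findall(src)

def parse_expr(toks):           # expr := term (('+'|'-') term)*
    val = parse_term(toks)
    while toks and toks[0] in "+-":
        op = toks.pop(0)
        rhs = parse_term(toks)
        val = val + rhs if op == "+" else val - rhs
    return val

def parse_term(toks):           # term := factor (('*'|'/') factor)*
    val = parse_factor(toks)
    while toks and toks[0] in "*/":
        op = toks.pop(0)
        rhs = parse_factor(toks)
        val = val * rhs if op == "*" else val / rhs
    return val

def parse_factor(toks):         # factor := NUMBER | '(' expr ')'
    tok = toks.pop(0)
    if tok == "(":
        val = parse_expr(toks)
        toks.pop(0)             # consume ')'
        return val
    return int(tok)
```

Notice how precedence falls naturally out of the call structure, and how easy it would be to read, debug or extend by hand - the argument for maintaining a bootstrapped parser manually.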
- Abstract Syntax Trees
Wouldn't it be nice if the parser were to generate only abstract syntax trees which could then be processed later, more optimally? The parser would simply parse the keywords, turning the identifiers into atoms which could later be compared with a simple integer compare operation - no need for laborious string comparisons. SELECT statements could be handled more efficiently, as the joins could be evaluated before the expressions to emit are examined, which would eliminate the extra pass currently done to resolve field names to their tables. It would also mean that column types would be known at the time of parsing the expressions, which would allow, perhaps, the overloading of functions and more.
I believe that the extra AST stage would incur less cost than all the string comparisons currently done resolving identifiers, which is O(n*m/2).
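A sketch of the atom idea: identifiers are interned once into small integers at parse time, so every later comparison during name resolution is an integer compare (the class name and case-folding rule here are assumptions):

```python
# Intern identifiers into integer atoms at parse time; later name
# resolution then compares integers instead of strings.
class AtomTable:
    def __init__(self):
        self._ids = {}      # folded name -> atom
        self._names = []    # atom -> folded name
    def intern(self, name: str) -> int:
        key = name.lower()  # assume case-insensitive identifier comparison
        if key not in self._ids:
            self._ids[key] = len(self._names)
            self._names.append(key)
        return self._ids[key]
    def name(self, atom: int) -> str:
        return self._names[atom]
```

The one string hash per identifier at parse time replaces the repeated string comparisons of the resolution pass.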
- SCTP and/or link aggregation
It would be good/useful to be able to bundle multiple connections down a single network link. This would reduce the number of concurrent sockets which mysqld handles when communicating with (for example) a J2EE application server. SCTP performs the multiplexing within the operating system's network stack, although I would imagine that it would eventually be optimized to be handled within user-land.
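A toy sketch of the multiplexing idea, in the spirit of SCTP streams: frames tagged with a stream id and a length are interleaved over one link, so many logical connections share one socket (the framing format here is entirely invented):

```python
import struct

# Toy framing: each frame on the shared link is (stream_id, length, payload).
HEADER = struct.Struct(">HI")   # 2-byte stream id, 4-byte payload length

def pack_frame(stream_id: int, payload: bytes) -> bytes:
    return HEADER.pack(stream_id, len(payload)) + payload

def unpack_frames(buf: bytes):
    """Yield (stream_id, payload) pairs back out of the shared byte stream."""
    offset = 0
    while offset < len(buf):
        stream_id, length = HEADER.unpack_from(buf, offset)
        offset += HEADER.size
        yield stream_id, buf[offset:offset + length]
        offset += length
```

With SCTP this demultiplexing lives in the kernel's network stack; the sketch just shows what a user-land equivalent would be doing.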
- Parsing within the client
The statements could be parsed within the client library and an optimal bytestream representing the AST could be transmitted instead. This lowers the network overhead and lifts some of the burden of parsing from the server. It would also allow the client to rapidly report errors should there be some error in the syntax. This would not bypass any security, as the AST should only be a binary representation of the SQL statement and should be completely reversible, except that things such as comments would be lost. On the plus side, the query cache's job would be easier.
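A toy demonstration that an AST can be a reversible encoding of the statement, with comments as the only loss - the grammar and AST shape here are drastically simplified inventions covering only `SELECT cols FROM table`:

```python
import re

# Toy reversible AST: parse() strips comments and builds a tree,
# unparse() reconstructs equivalent SQL text from the tree.
def parse(sql):
    sql = re.sub(r"--[^\n]*", "", sql)      # comments are dropped here
    m = re.match(r"\s*SELECT\s+(.+?)\s+FROM\s+(\w+)\s*$", sql, re.I | re.S)
    if m is None:
        raise SyntaxError("unsupported statement in this toy grammar")
    cols = [c.strip() for c in m.group(1).split(",")]
    return ("select", cols, m.group(2))

def unparse(ast):
    tag, cols, table = ast
    assert tag == "select"
    return "SELECT %s FROM %s" % (", ".join(cols), table)
```

Because the round trip loses nothing but comments and whitespace, the server can always recover the statement text - which is also why such an encoding bypasses no security checks.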
- Integrated federation
Best illustrated by example:
CREATE FEDERATION odbc USING LIBRARY 'ha_odbc.so' EXTERNAL NAME 'odbc_engine';
CREATE FEDERATION google USING LIBRARY 'ha_google.so' EXTERNAL NAME 'google_engine';
CREATE CONNECTION fooserver USING "mysql://fooserver/db";
CREATE CONNECTION barserver USING "odbc://otheroradsn";
CREATE CONNECTION google USING "google:///";
-- select against the bar table on fooserver
SELECT foo.*, bar.* FROM foo@fooserver JOIN bar@barserver USING (id);
SELECT * FROM `site:mysql.com +engine +archive`@google;
- Elimination of .frm files
I think Jay Pipes and Ronald Bradford have already heard this idea from me. Each 'schema' database directory should have a simple text file called 'schema.sql' containing a list of CREATE TABLE statements for each table within that directory. When someone performs an ALTER or DROP operation, the statement is initially simply appended to the text file. At some later flush, or at shutdown, a new file is created called 'schema.new' which is simply the current state of play for the directory, containing only CREATE TABLE, CREATE VIEW and similar declarations. When the new file is finished, it is renamed as the new schema file.
Why is this good? It means that all the table declarations are there in a nice, human-understandable format. During run time, it is trivial to simply append to the file. It will be quick to parse when the schema is first accessed. When the schema is garbage-collected, perhaps by reference count, it is trivial to flush out all the declarations. It will improve the performance of mysqld where there are many tables, as all the table definitions for a schema are loaded at once. There are plenty of benefits. And in the case of failure or crash, it is quite recoverable: if the table definition after an ALTER doesn't match the table - as when an incomplete ALTER operation has occurred - the table definition before it may be used.
- Elimination of errmsg.sys files
It would be much nicer if all the error messages for all supported languages were stored within a system table within mysql. This would allow different connections to have different languages for their error messages, and would even allow the creation of a special 'localization' for machine parsing - for use in cases where the error message needs to be processed further in the client. This may be useful for throwing exceptions with specific properties parsed from the error message, or for use with federation, where the federated server needs to parse the error message to make a sane local error message. Also, I would extend the syntax used by the error message templates so that positional modifiers can be used, similar to Java localization; this would be useful because some error messages may otherwise be strained/contrived to fit the same argument order in all languages. A column could also be used for additional descriptive and help information so that a user may ask:
HELP ERROR 1234
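A sketch of such a table with positional modifiers: each language may bind the same arguments in a different order, and unknown languages fall back to English (the table rows and lookup scheme are invented for illustration):

```python
# Invented per-language error templates keyed by (errno, language);
# {0}, {1}, ... are positional modifiers, so argument order can differ
# per language while the caller always passes the same argument list.
ERROR_TABLE = {
    (1146, "en"): "Table {1} doesn't exist in schema {0}",
    (1146, "de"): "In Schema {0} existiert Tabelle {1} nicht",
}

def format_error(errno, lang, *args):
    """Render an error in a connection's language; fall back to English."""
    template = ERROR_TABLE.get((errno, lang)) or ERROR_TABLE[(errno, "en")]
    return template.format(*args)
```

Note how the English and German templates consume the same `(schema, table)` argument pair in different orders - the strained-wording problem the positional modifiers avoid.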
Woah, that ended up being perhaps my longest blog posting.... It's amazing what you end up typing and copy/pasting while waiting for repositories to clone.