MySQL does not support entire federated databases, but it does support federated tables – as long as the FEDERATED storage engine is enabled in my.ini or my.cnf.
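As a quick reference, this is roughly what enabling it looks like in the config file (assuming the standard [mysqld] section; the server has to be restarted afterwards, and SHOW ENGINES should then list FEDERATED with Support = YES):

# my.cnf / my.ini – the FEDERATED engine is disabled by default
[mysqld]
federated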
To create an entire federated DB in a non-manual, easy-to-maintain way (the table definitions can be refreshed whenever the source table structure changes), we can use the following procedure.
First, the caveats:
1. Federated tables do not use indices in the usual way, so you may need to create a table locally and index it before expecting any real performance.
2. When using a client such as Heidi that runs SHOW TABLE STATUS, the connection between your DB and the source DB of the federated table remains open. An issue can arise if the number of tables is significant while max_connections on the source DB is low. To get around this, periodically use FLUSH TABLES on the destination DB to close the connections. There are alternatives to this, but it is the simplest workaround (though perhaps not the best).
Personally, to get around this issue I scheduled a MySQL event that flushes every federated table individually (not FLUSH TABLES, but FLUSH TABLE tablename1; FLUSH TABLE tablename2; and so on) to avoid blocking other queries. FLUSH TABLES flushes all tables, so it has to wait for running queries to finish, and any new queries have to wait for it to finish, which rules out concurrent queries.
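A minimal sketch of such an event, assuming two federated tables named fed_table1 and fed_table2 (hypothetical names) and that the event scheduler is enabled (event_scheduler = ON):

DELIMITER $$
CREATE EVENT flush_federated_tables
ON SCHEDULE EVERY 1 HOUR
DO
BEGIN
  -- flush each federated table individually so other queries are not blocked all at once
  FLUSH TABLE fed_table1;
  FLUSH TABLE fed_table2;
END$$
DELIMITER ;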
1. Get a no-data mysqldump in the same way as you get a normal dump, but with the -d switch for no data:
mysqldump -d -P3306 -h 1.1.1.1 -u username -pPASS dbname > mydump.sql
2. Convert the ENGINE part of the table definitions into FEDERATED plus a CONNECTION string (here done with Python):
# (.*?) is a capturing group, so the contents matched by that pattern can be reused later
# .*? is the non-greedy version of .*
# \1 is a backreference to the first capturing group for each occurrence,
# which will be, for example, "tablename1", "tablename2" and so on
# the patterns are raw strings with the prefix r (which is a good habit to take)
# the re.DOTALL flag allows "." to match newlines
import re

def convertdump(input, output, conn):
    fin = open(input, 'r')
    filedata = fin.read()
    fin.close()

    pattern = r"CREATE TABLE `(.*?)`(.*?)ENGINE=.*?;"
    replace_pattern = r"CREATE TABLE `\1`\2ENGINE=FEDERATED DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci CONNECTION='" + conn + r"\1';"
    newdata = re.sub(pattern, replace_pattern, filedata, flags=re.DOTALL)

    fout = open(output, 'w')
    fout.write(newdata)
    fout.close()

if __name__ == '__main__':
    # only run this part when the file is run directly, and not when the file is imported
    convertdump('mydump.sql', 'fed_dump.sql', 'mysql://user:pass@ip:port/db/')
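To make the effect of the regex concrete, here is a hypothetical before/after (the customers table and its columns are invented purely for illustration). Everything up to the ENGINE= clause is kept via the two capturing groups; only the engine part is replaced:

-- in mydump.sql (structure-only dump of the source):
CREATE TABLE `customers` (
  `id` int(11) NOT NULL,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- in fed_dump.sql (after convertdump):
CREATE TABLE `customers` (
  `id` int(11) NOT NULL,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=FEDERATED DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci CONNECTION='mysql://user:pass@ip:port/db/customers';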
3. Load up your new dump:
mysql --host=127.0.0.1 --port=3306 --user=username --password=pass --database=fed_db_name < "fed_dump.sql"
You can run this procedure periodically or on demand to refresh the table definitions if the source definitions change.
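Purely as an illustration, the three steps can be chained into a single refresh script. This sketch assumes the conversion script above is saved as convert_dump.py next to it, that mysqldump and mysql are on the PATH, and that all hosts, credentials and database names below are placeholders to be replaced:

# refresh_federated.py – hypothetical wrapper around the three steps above
import subprocess
from convert_dump import convertdump

def refresh():
    # 1. structure-only dump of the source database
    with open('mydump.sql', 'w') as dump:
        subprocess.run(
            ['mysqldump', '-d', '-P3306', '-h', '1.1.1.1',
             '-u', 'username', '-pPASS', 'dbname'],
            stdout=dump, check=True)

    # 2. rewrite ENGINE=... into ENGINE=FEDERATED with a CONNECTION string
    convertdump('mydump.sql', 'fed_dump.sql', 'mysql://user:pass@ip:port/db/')

    # 3. load the rewritten definitions into the local federated database
    with open('fed_dump.sql', 'r') as fed:
        subprocess.run(
            ['mysql', '--host=127.0.0.1', '--port=3306', '--user=username',
             '--password=pass', '--database=fed_db_name'],
            stdin=fed, check=True)

if __name__ == '__main__':
    refresh()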