PostgreSQL 8.3 Released

jadavis writes "The release of the long-awaited PostgreSQL version 8.3 has been announced. The new feature list includes HOT, which dramatically improves performance for databases with high update activity; asynchronous commit; built-in full text search; large database features such as synchronized scans and reduced storage overhead; built-in SQL/XML support; spread checkpoints; and many more (too many major new features to list here). See the release notes for full details."
  • by Anonymous Coward on Monday February 04, 2008 @04:10PM (#22297600)
    Would that be POSTGR~1.SQL?
  • I will probably wait for a while before I update but this looks great.
    HOT and the full text search are two features that I could use.
    Postgres is a good reliable database server. I just wish more projects supported it as an equal to MySQL.
    • Re:Nice. (Score:5, Funny)

      by Seumas ( 6865 ) on Monday February 04, 2008 @04:35PM (#22298096)
      8.3 had me at "full-text search".

      Now, please excuse me while Postgres 8.3 and I go take a little alone-time in a dark closet.
    • by ianare ( 1132971 )
      Agree. For an in-house project tracking system we decided to go with MySQL, not because it is better (in fact we found Postgres superior), but because of the lack of support in various software components.
      • Re: (Score:3, Insightful)

        by tacocat ( 527354 )

        There are two very serious flaws in your thinking about how and why you chose MySQL. In general it's rampant with PHB thinking.

        There is the obvious discussion you didn't have when you decided MySQL was better for your company. Why would you take something as important as your company database and leave it to a vendor to support? You have no in-house knowledge of your database. You have no back-up in the event that vendor gets into a contract dispute with you. And every time you need their suppor

        • Re: (Score:3, Interesting)

          by LWATCDR ( 28044 )
          The real advantage that MySQL has right now is the number of packages that are MySQL-centric.
          You need a Wiki? Odds are that it supports MySQL and can be made to work with Postgres. Need a content management system? Odds are that it was written for MySQL and might work with Postgres. You probably already have MySQL in house, so people tend to go with MySQL even when Postgres is a better solution, if for no other reason than: why maintain two databases?
          The only reason why we use Postgres for several of o
  • by Foofoobar ( 318279 ) on Monday February 04, 2008 @04:11PM (#22297636)
    The one thing that has stopped me from picking up Postgresql yet is that I can't do cross database joins on the same server. Should a user have the same permissions on two separate databases on the same server, a properly constructed query should be able to join across multiple DBs, but as far as I am aware they still don't implement this.
    • by geniusj ( 140174 ) on Monday February 04, 2008 @04:15PM (#22297704) Homepage
      You should consider using schemas as opposed to entirely separate databases.
      • Use dblink:
        http://www.postgresql.org/docs/faqs.FAQ.html#item4.17 [postgresql.org]

        Not as slick as doing it entirely in SQL, but it works.
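
        Something along these lines should do it (a rough sketch; "otherdb" and the table and column names are made up):

          SELECT o.id, c.name
          FROM orders o
          JOIN dblink('dbname=otherdb', 'SELECT id, name FROM customers')
               AS c(id integer, name text)
            ON o.customer_id = c.id;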
    • by dfetter ( 2035 ) <david@fetter.org> on Monday February 04, 2008 @04:31PM (#22298020) Homepage Journal
      There are several ways to do cross-database JOINs in Postgres including dblink [postgresql.org], and even to other DBMSs via dblink-tds [pgfoundry.org] and DBI-Link [pgfoundry.org], but try schemas first, as another poster mentioned.
    • by poet ( 8021 )
      Look at plproxy or use schemas.
    • Are you thinking of how you'd do it in SQL Server (IIRC) or MySQL, where you have a db reference in the table list (i.e., SELECT * from db1.table1, db2.table2 WHERE [join clause])?

      You'd probably just want to use a schema for that; the concept maps more or less the same way.
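
      For example (made-up names), after moving the two databases into schemas, the MySQL-style query carries over almost unchanged:

        CREATE SCHEMA db1;
        CREATE SCHEMA db2;
        -- ...create the tables inside their schemas, then:
        SELECT * FROM db1.table1, db2.table2 WHERE db1.table1.id = db2.table2.ref_id;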
        A schema is an unnecessary step though. Nothing makes it a needed step in this case (except in large scale implementations where it may offer easier maintenance). It just adds an extra layer of complexity and slows down what could be a far speedier query process and development process.

        Not that I'm doing a bunch of cross database joins but I like to separate out my databases for future scalability on the network and schemas answer 70% of the solutions but leave others in the lurch as


        • Ok, so first you say:

          A schema is an unnecessary step though. Nothing makes it a needed step in this case (except in large scale implementations where it may offer easier maintenance).

          Which is fine, as far as it goes, I guess. If you don't need it, then it doesn't serve a purpose for you.

          Not that I'm doing a bunch of cross database joins but I like to separate out my databases for future scalability on the network

          See, but whereas before you were complaining about how schemas add an unnece

          • Not planning for scaling is a common failure of a bad developer. A good developer plans for scaling. Schemas aren't always a catchall solution. For instance, say I have a user table in one database but 16 other databases have tables with keys that reference that table. If I were to build schemas for EVERY function, I would have a SHITLOAD of schemas to manage. If I were to use ORM, I would have a bunch of bloated queries. Instead I can just build the queries as cross database joins and just manage the qu
            • Re: (Score:3, Informative)

              by ahodgson ( 74077 )
              A PostgreSQL schema is just a namespace qualifier; it functions just like MySQL's cross-database joins and is conceptually similar. It isn't a full copy of your database DDL.
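
              You can even hide the qualifiers entirely via the search path (a sketch; schema and table names are made up):

                SET search_path TO db1, db2, public;
                -- unqualified names now resolve through the path:
                SELECT * FROM table1 t1 JOIN table2 t2 ON t1.id = t2.ref_id;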
  • Will it be used? (Score:4, Informative)

    by Anonymous Coward on Monday February 04, 2008 @04:13PM (#22297666)
    I'm a postgresql fan, I've considered it a superior database for years.

    However, every client I come into contact with (I am a freelance software designer) seems to believe mysql is the only open source database available, and certainly the best one for all jobs.

    Mysql is great (or at least, was great) for fast connection times and speed, but for a stable, feature-rich, excellent database, postgresql has always been ideal.

    It's just a shame no one seems to be aware of it.

    • Re: (Score:3, Informative)

      by cheater512 ( 783349 )
      People like myself who design software requiring a database usually prefer speed over features.
      That's why MySQL is usually chosen.
      • by ByteSlicer ( 735276 ) on Monday February 04, 2008 @05:35PM (#22299052)
        That, and people are more likely to remember 'My' than 'Postgre'.
      • Re:Will it be used? (Score:4, Interesting)

        by jadavis ( 473492 ) on Monday February 04, 2008 @06:30PM (#22299872)
        People like myself who design software requiring a database usually prefer speed over features.

        Keep in mind that PostgreSQL may have more stable performance for a varied workload. That may mean fewer surprise slowdowns for you.

        I don't know your specific situation, but you may want to re-evaluate postgresql for your needs, especially if you care about performance -- PostgreSQL made leaps and bounds in this area in 8.3. I'm not sure what the last version you tried was, but 8.2 was a good performance boost as well.

        And if it still doesn't hold up to your expectations, please post your benchmarks to pgsql-performance, so that others can either help you meet the needs, or improve postgresql for the future.

        I would think also, as a developer, you might like the data integrity available in PostgreSQL that can help catch tough bugs early. Also, MySQL has many configurable options that can make your application not work and your customers unhappy (including table type -- changing table types from MyISAM to InnoDB or vice versa may break applications). PostgreSQL's options (for the most part) don't affect the application.
        • Stability isn't critical for my applications.
          Raw speed is, however. A decrease in speed would be rather bad.

          I'm talking about 600 queries per second averaged over the day.
          I've never bothered figuring out what the peak is. I'm scared what it could be. :)

          All I really use is basic SELECT, INSERT and UPDATE.
          Nothing fancy. Just straight basic SQL.

          MySQL fits these requirements perfectly.
          PostgreSQL not so much.
      • Re: (Score:3, Informative)

        by LurkerXXX ( 667952 )
        You people who design software should read up on this thing called data integrity, and enforcement of foreign key constraints.

        MySQL is pretty bad at those, but if you use an InnoDB table and try to use them, you find it's no faster than postgresql. And it's still missing many, many features that postgresql gives you.
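
        If you've never seen it bite, here's a sketch (made-up tables): on a MyISAM table MySQL parses the REFERENCES clause and silently throws it away, while postgresql always enforces it.

          CREATE TABLE customers (id integer PRIMARY KEY);
          CREATE TABLE orders (
              id          integer PRIMARY KEY,
              customer_id integer NOT NULL REFERENCES customers (id)
          );
          INSERT INTO orders VALUES (1, 42);  -- postgresql rejects this if customer 42 doesn't exist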
  • And then... (Score:3, Interesting)

    by n3tcat ( 664243 ) on Monday February 04, 2008 @04:18PM (#22297782)
    someone will make a comment regarding how sad the story of Postgres's popularity is, and how they've seen German folk music with more of a following.
  • by stoolpigeon ( 454276 ) * <bittercode@gmail> on Monday February 04, 2008 @04:27PM (#22297942) Homepage Journal
    This was a new feature for Oracle with 10g R2 also - and as a DBA I can only shake my head and ask "why?" Why would you want to drop the durability part of ACID? Why would you risk losing data for speed? There are so many ways to tune things and speed things up without taking such drastic measures. I know I'd fight tooth and nail before I'd turn this on in anything I managed. I just hate to think that someone with less understanding is going to think of it as a 'go-faster' button and then blame postgres when they lose something important.
    • by Wesley Felter ( 138342 ) <wesley@felter.org> on Monday February 04, 2008 @04:32PM (#22298028) Homepage
      Haven't you heard? In Web 2.0, data integrity doesn't matter.
    • by nuzak ( 959558 ) on Monday February 04, 2008 @04:40PM (#22298186) Journal
      > Why would you want to drop the durability part of ACID?

      SQL already allows you to drop to READ_UNCOMMITTED if you really really want to -- though the DB is actually under no obligation to drop to that level, you're just specifying that you don't care. That removes A, C, and I all at once. Why not make the D part optional too?

      Not all databases are commerce. My company processes several billion rows of data a day, and if we accidentally lose some data, it just degrades effectiveness a little bit and means our statistics have to interpolate a smidge. In fact, we deliberately drop a lot of it anyway.
      • by nuzak ( 959558 )
        I'm smoking the discount crack today, what with the underscore in READ UNCOMMITTED and saying it removes Atomicity. Atomicity isn't lost, though running with autocommit on pretty much does the same thing (okay, not technically, but effectively). Seriously, a DB has to make ACID available, and sensibly speaking, the default. It doesn't mean that the user can't override it if they explicitly say that they don't care.
      • by RelliK ( 4466 ) on Monday February 04, 2008 @04:56PM (#22298418)

        SQL already allows you to drop to READ_UNCOMMITTED if you really really want to -- though the DB is actually under no obligation to drop to that level, you're just specifying that you don't care. That removes A, C, and I all at once. Why not make the D part optional too?
        False. The SQL standard explicitly specifies that writing to the database under READ UNCOMMITTED isolation is not allowed; you can only do read-only queries. Further, PostgreSQL doesn't even support READ UNCOMMITTED. There is no need for it: PostgreSQL implements MVCC [wikipedia.org] such that each transaction gets a private snapshot of the database. With that you get READ COMMITTED for free.

        I'm with the original poster here. Asynchronous transactions seem like a bad idea. But then it's not PostgreSQL's responsibility to enforce good software design. And maybe in some corner cases people can find use for them.
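
        As an aside, you can see the READ UNCOMMITTED point for yourself: PostgreSQL accepts the syntax but quietly gives you READ COMMITTED semantics instead, which the standard permits (a sketch; "accounts" is a made-up table):

          BEGIN;
          SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- accepted, treated as READ COMMITTED
          SELECT * FROM accounts;  -- still sees only committed rows
          COMMIT;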

    • Re: (Score:3, Insightful)

      by Simon (S2) ( 600188 )
      Because sometimes you don't really care if the data will be there after the commit or not, you just need to do it fast. For example, say you have a sensor that counts how many nails go in a package. You have to fill the package with 1000 nails, but it is not really important if there are 999 or 1001 nails in the package; the important thing is that the counter goes fast, say 1000 counts in 1/100 of a second.
      It's not a feature you will use in your web or c/s app, but it has its uses, and it's good to have it
      • Re: (Score:3, Interesting)

        by stoolpigeon ( 454276 ) *
        I have a limited frame of reference - my experience has primarily been in the support of mission critical business processes - where data loss is the end of one's job. And from the replies I guess I can see that circumstances exist where this might be desirable, though part of me wonders whether, in such cases, a database is the right tool.
         
        My other concern still stands - I hope the documentation makes the ramifications of choosing this option clear.
        • by Simon (S2) ( 600188 ) on Monday February 04, 2008 @05:02PM (#22298532) Homepage
          Sure. Like so many times in software development "only you can decide whether data loss is acceptable to you. But there are many classes of applications, including high-speed data ingest programs whose only goal is to get the stream of data into the database, where commit-but-don't-wait is not only acceptable but very desirable performancewise." (Tom Kyte)
        • by LWATCDR ( 28044 )
          Databases can be a good tool for this because of the ease of extracting the data with standard reporting tools. You are correct that many times databases are being used to replace flat files. While it is often not the optimum choice, it is often the right choice. Modern databases are fast enough, and thanks to FOSS cheap enough, that it is just easier to use a database you know than to write your own file handling code. It also means that if you expand the application in the future, the data is already in a database
      • Re: (Score:3, Insightful)

        by Ruzty ( 46204 )
        Such a count does not require a DB write per transaction (nail++). Such minor amounts of data are better left memory resident if they are wiped after a set quantity is reached. DB writes are for keeping state. In your example the only reason to keep state is that, should the machine fail with boxes partially filled, the remaining count needed to complete a box must be known. That is better done with a physical weight measurement than a DB query.

        Asynch writes are useful for keeping
    • by Anonymous Coward

      Asynchronous commit is very useful in applications where the thing that's important about the data is its statistical distribution, and not the individual data points per se.

    • In the line of work I'm usually in, data warehousing, we have large data loading jobs that want to load say 100M rows of data.
      For one thing, you usually can't do that in one transaction. To get over that problem you slam commits in between.
      You basically don't care if the commits work immediately or not. What you care about is whether or not the 100M rows of data end up in the target table.

      As such, the question then becomes: when am I going to be certain that the data is written to disk? Obviously not whe
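
      With 8.3's asynchronous commit the pattern looks something like this (just a sketch; the table and file names are made up):

        SET synchronous_commit TO off;  -- new in 8.3
        BEGIN; COPY facts FROM '/load/chunk_0001.csv' WITH CSV; COMMIT;
        BEGIN; COPY facts FROM '/load/chunk_0002.csv' WITH CSV; COMMIT;
        -- ...each COMMIT returns without waiting for the WAL flush...
        SET synchronous_commit TO on;
        BEGIN; COPY facts FROM '/load/chunk_9999.csv' WITH CSV; COMMIT;  -- durable, and WAL ordering flushes everything before it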
    • Re: (Score:3, Informative)

      by ivoras ( 455934 )
      Are you sure this is such a disaster? As far as I can tell this only means that executing "COMMIT" doesn't block (wait) until the commit has actually happened but returns immediately, and the actual operation is performed "later". The data still goes through the journal (WAL), is still fsynced when needed, etc.
    • Why would you want to drop the durability part of ACID? Why would you risk losing data for speed?

      We run an hourly job to copy legacy FoxPro data to PostgreSQL [honeypot.net]. It's squirreled away in its own schema, and should that schema get totally destroyed, it only takes about 20 minutes to do a full rebuild.

      I would happily trade integrity for speed on that schema, and anything that gives me that option is welcome.

    • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday February 04, 2008 @05:46PM (#22299212) Homepage

      Why would you risk losing data for speed? There are so many ways to tune things and speed things up without taking such drastic measures.


      The new async commit feature bypasses the requirement that records physically hit disk in order to complete a commit. If you must wait for a disk commit (typically enforced by fsync), the maximum number of true commits any one client can do is limited by the rotation speed of the hard drive; typically an average of around 100/second for a standard 7200RPM disk with PostgreSQL. There is no way whatsoever to "tune things and speed things up" here; that's how fast the disk spins, that's how fast you get a physical commit, period.

      In order to accelerate this right now one needs to purchase a disk controller with a good battery-backed cache and pray it always works. If it doesn't, your database might be corrupted. With async commit, you can adjust the true commit rate to something your disks can keep up with (say 50/second) just with this software feature, while still allowing a write rate much higher than that, and at no point is database corruption possible (from this cause anyway). This gives people who want to use PostgreSQL in things like shared hosting environments an option that allows heavy writes even for a single client while keeping a reasonable data integrity policy--only server crashes should ever lose you that brief period since your last true commit. That's a fair trade for some applications (think web message boards for example) and lets PostgreSQL be more competitive against MySQL based solutions in those areas.
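
      The knob itself is trivial to use, and can even be flipped per transaction (a sketch; the table is made up):

        SET synchronous_commit TO off;  -- whole session
        -- or only for transactions that can afford to lose the last instant:
        BEGIN;
        SET LOCAL synchronous_commit TO off;
        INSERT INTO page_hits (url, hit_at) VALUES ('/index', now());
        COMMIT;  -- returns before the WAL record is fsynced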
    • Re: (Score:3, Funny)

      by plopez ( 54068 )
      Why would you want to drop the durability part of ACID? Why would you risk losing data for speed?

      So you can write sloppy inefficient code and do sloppy DBA work and get away with it?

      Just an idea.
  • long live postgres (Score:4, Insightful)

    by squoozer ( 730327 ) on Monday February 04, 2008 @04:37PM (#22298132)
    As far as databases go I don't think it gets much better than Postgres, and I reckon over the years I've tried most popular databases. What I really don't understand though is why Postgres doesn't own more of the database market. Sure it was a bit slower than MySQL a few years ago, but the benefits that you reaped for that slightly slower speed far outweighed the cost. The difference now is, I would say, much smaller and less significant.

    I can only assume that MySQL keeps its large market share because it has commercial backing and therefore good support. I'm sure there are plenty of products that don't require that level of support though.
    • Re: (Score:3, Interesting)

      by costing ( 748101 )

      I can only assume that MySQL keeps its large market share because it has commercial backing and therefore good support.

      No, it's because people are used to LAMP, and tons of easy-to-install apps only have MySQL support. But there is hope; I see more and more PHP apps allowing you to choose PostgreSQL instead. I think this is the turning point: once they reach the critical mass needed to turn the developers' heads it will become THE open source database. And for a good reason: it beats MySQL in every way you can imagine, including the obvious features and the not so obvious performance. Well, maybe not for two queries on a 10-row table

      • Re: (Score:3, Insightful)

        by Lennie ( 16154 )
        The problem with the whole LAMP acronym is that we have a lot more to choose from. What do you think of LLPP (Linux, lighttpd, Perl, PostgreSQL) or something like that?
        • Re: (Score:3, Funny)

          by David_W ( 35680 )

          What do you think of LLPP (Linux, lighttpd, Perl, PostgreSQL) or something like that?

          My personal favorite is FAPP (FreeBSD, Apache, PostgreSQL, Perl).

          ...

          What's so funny?

    • What I really don't understand though is why Postgres doesn't own more of the database market.

      It's the market leader advantage combined with the network advantage. MySQL had it, and Postgres has been playing catch-up since. The obverse case of the "long tail" they talk about.
    • by Ed Avis ( 5917 )

      What I really don't understand though is why Postgres doesn't own more of the database market.
      Worse is better [jwz.org].
    • Nah. I don't use it because I still can't do a cross database join. That, and features like SQL_CALC_FOUND_ROWS and query caching, which make MySQL EXTREMELY useful.
    • I have used PostgreSQL as my primary db since 2000 (version 6.5!) and I have watched it for a while.

      PostgreSQL had a number of problems in the past which made it hard to work with including:
      1) No ALTER TABLE DROP COLUMN support and other things needed for prototyping (fixed in 7.2 iirc)
      2) Issues with dependency tracking in text SQL dumps (fixed in 8.0), meaning that sometimes one had to hack dumps to get them to restore properly.
      3) maintenance tasks required exclusive locks on tables (corrected sometime in
    • Re: (Score:2, Insightful)

      Replication. None of the replication options for Postgres are particularly pleasant, especially when compared to the support that's built into MySQL.
    • by mortonda ( 5175 )

      What I really don't understand though is why Postgres doesn't own more of the database market.

      This may sound silly, but for me the biggest hangup has been getting a database started - the default install IIRC uses unix users to authenticate into their own databases, whereas mysql has its own internal user database - mysql I can set up pretty quickly, but postgresql I always have to hunt for the right config file to set up a user that I can connect with from the network. I know, it would probably get easier if I used it more.

      But that brings me to the second problem - While the documentation fo

      • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday February 04, 2008 @08:23PM (#22301248) Homepage
        This is one of those things that's painful only until you've crossed a certain threshold of information, then it never bothers you again. It's better now but still harder to put the pieces together than it should be.

        Start with the documentation on creating a cluster: http://www.postgresql.org/docs/current/static/creating-cluster.html [postgresql.org] In 8.3 the default authentication you mentioned has been removed, for the reasons you described. It now runs unsecured for local users by default and you have to tighten this yourself, which was deemed an improvement since it reduces the frustration of getting started.

        That page suggests some options you can pass to initdb to adjust the default security level. Now you want to look at the initdb docs: http://www.postgresql.org/docs/current/static/app-initdb.html [postgresql.org] See that if you use -W/--pwprompt (same thing) you can assign a password to the database superuser at cluster creation time. If you do that, you can then change the default authentication scheme to password-based (-A md5 passed to initdb will do that), you'll be secure, and you'll have one user you can log in as (postgres) to create more.

        To see what other authentication methods are available and to learn what your options are, look at http://www.postgresql.org/docs/current/static/client-authentication.html [postgresql.org] The one you really need to dive into is pg_hba.conf, which is the magic text file to edit here. A new one of those will be put in the base directory of your new database cluster. Open that file up, look at the documentation, and add a line for network support like one of those in the examples. Probably something like

        host postgres all 192.168.12.0/24 md5

        (allow anybody on the 192.168.12 subnet to access the postgres database with a password)

        That should get you past the ugly initial hurdles. The next document you may need is how to add more users: http://www.postgresql.org/docs/current/static/sql-createrole.html [postgresql.org]

        Again, look at the examples first and then backtrack to the parameters; it will make more sense that way. After that you'll want to create more databases with createdb: http://www.postgresql.org/docs/current/static/app-createdb.html [postgresql.org]
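
        For example, once you can log in as postgres (names here are made up):

          CREATE ROLE appuser LOGIN PASSWORD 'secret';
          CREATE DATABASE appdb OWNER appuser;

        (CREATE DATABASE is the SQL equivalent of the createdb wrapper.)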

        And then you should be able to find your way around from there using the psql command line tool.

        Note that once you get past accepting connections over the network, you could use a tool like pgAdmin III to handle the rest of this work using a slicker interface. There's even a copy of it bundled with the Windows installer, which you can use on such a client to administer a remote server running a real OS. It's of course possible to install pgAdmin manually on other platforms as well; see http://www.pgadmin.org/ [pgadmin.org] for other versions (note that binary packages for platforms like RPM don't show up in every release; you have to go back to v1.8.0 to get the last full set of packages).
        • Re: (Score:3, Interesting)

          "This is one of those things that's painful only until you've crossed a certain threshold of information, then it never bothers you again."

          By then, for the most part, it's too late.

          There are *a lot* of people that started their days on DBs on their own, or with it strapped to their back as yet another duty. Then they go open source just because it's easier to start with (no provisions, not a dozen calls to a provider, just an "apt-get install" and you are on your way). Then you look at Postgres; it has a reputation as a serious,
  • by jd ( 1658 ) <`imipak' `at' `yahoo.com'> on Monday February 04, 2008 @04:40PM (#22298180) Homepage Journal
    PostgreSQL 8.3 is nicely timed. I've been looking forward to trying it in a setting which wouldn't allow the use of betas. Now that I've got the on-topic stuff out of the way, onto my main point.

    There are so many Open Source databases (MySQL, MaxDB the last GPL version, Firebird, Postgresql, Ingres Community Edition, hsqldb and H2) that it is hard to know which ones implement what, which ones are useful for what, or which ones are optimal for what. Simple benchmarks (a) rarely are, and (b) usually want to promote one specific product over another. There are standardized tests, but they cost a small fortune and are run by a relatively closed group who probably don't have a fraction of the range of real-world database experience, so they cannot possibly be trusted to authenticate a standard or measure that could be used to compare databases.

    We could really do with some serious side-by-side evaluations of these database engines, or at least decide what such evaluations would need to be to be actually useful. (Hey, maybe CmdrTaco can add a comparison section, get it sponsored by Which? or some other consumer guide, and have some of us run evaluations. It'd have to be done well to not be flamebait, which I think might rule me out, but if it could be done, it would be hellishly useful.)

  • Upgrade Procedure (Score:3, Interesting)

    by AltImage ( 626465 ) on Monday February 04, 2008 @05:53PM (#22299326) Homepage
    "A dump/restore using pg_dump is required for those wishing to migrate data from any previous release"

    Postgres has a habit of making you do this to upgrade and it really sucks. I understand the reasons behind it, but that does not reduce the amount of suck, especially for a large database.
    • by ashridah ( 72567 )
      Most of the admins I know that deal with large PostgreSQL databases use slony-I to do their migrations.

      Of course, if you're tight on hardware/disk resources, you're probably in trouble no matter what, but slony can get the migration/sync downtime to a minimum of "shutdown, change setting, restart", if done correctly.
      Don't forget that PostgreSQL can easily be run side-by-side with another version of PostgreSQL on the same box, so long as they aren't sharing a data tree (duh) or network ports. This mi
      • Judging from history, it will be two years or more before slony is compatible with PostgreSQL 8.3. And the intrusive schema changes required of slony are unacceptable to most every DBA I've asked about it.
        • by ashridah ( 72567 )
          Intrusive? Eh?

          Last time I set up slony-I myself, it required a _replication schema. That's hardly intrusive, it's a *separate schema*.

          The main difficulty is that it then tends to attach itself via a number of REFERENCES. Of course, since we're not actually discussing long-term usage, but short-term usage for purposes of migration, this is hardly a major problem. You add in slony's replication details, trigger replication, wait until replication has succeeded, promote the slave and shut down the master, test
        • Slony is already compatible with 8.3; the current maintainer (Chris Browne) is very on top of things. There are still some rumblings about problems with the dropping of some implicit casts to text in 8.3, but they have been squashing those as reported for months now and they may already be completely gone. Since anybody with a database big enough to need Slony for version upgrades is surely going to test heavily, issues there should easily be caught during that testing.

          Intrusive schema changes
  • Postgres Books? (Score:2, Interesting)

    Does anyone know of a good, semi-recently written book on PostgreSQL? Everything I find is from at least 3 years ago. Is it that PostgreSQL hasn't changed much, barring this release, in the past few years?
  • Just recently, I discovered that Ferret had synchronization problems when I deployed my http://cookingspace.com/ [cookingspace.com] site using nginx and a mongrel cluster - a little nuisance to work around. I did some fast experimenting with indexing and search using MySQL and PostgreSQL, and I made a note to retry PostgreSQL when version 8.3 was released.

    When a deployment platform has inherent weaknesses (like Rails!), it is important to be able to shove off as much processing as possible to more industrial strength tools
  • by G3ckoG33k ( 647276 ) on Tuesday February 05, 2008 @05:32AM (#22304780)
    Hi, I read that "MySQL does not uses several CPUs to execute single query - only multiple connections may benefit from several CPUs." That was written January 6, 2004 by Peter Zaitsev, then a full-time developer at MySQL AB, www.mysql.com. I found the quote at http://lists.mysql.com/benchmarks/45 [mysql.com]

    Does anyone know if PostgreSQL supports a dual or quad CPU when it comes to executing a single query, or if MySQL now supports it?

    The reason I ask is that I have a database with tens of millions of records and even 'simple' queries take a long time. Would it be beneficial to buy an 8-core machine, i.e. dual quad, over a single quad CPU?

    Thanks for any tips or links!
    • Re: (Score:3, Informative)

      by Just Some Guy ( 3352 )

      It would seem not to [postgresql.org]. Yeah, I wish it had that, too. The other posters who keep telling you to get faster IO miss the idea of having extra CPUs handle locking, cache management, etc. so that even single queries run faster.

  • Just one issue (Score:3, Interesting)

    by bytesex ( 112972 ) on Tuesday February 05, 2008 @09:12AM (#22306008) Homepage
    I'm a great fan of postgres but I ran into an irritating limitation recently; I replicate a database over a large number of very small nodes using slony. I really don't care about the integrity of the slaves - they're read-only to their clients, and should I suspect they're corrupt I just reboot (they live in memory and the OS lives on a 1 GB read-only flash drive). But postgres insists on having a WAL directory (pg_xlog) with chunks of 16MB in it. And that's big if you live in 128MB of ramdisk, and you can't turn it off. I mean, from my reasoning, the WAL isn't really used unless you do recovery; the versions of the data are in the db itself (otherwise we wouldn't need vacuum, now would we?). So why can't I just configure postgres to not use WAL? And then if the db is corrupt we just die. No, say the guys on IRC, you just have to recompile it with its hard-coded value of 16MB set to something lower. Yeah right. I'm not interested in hacks - I want a versatile RDBMS.
