Sounds awesome, right? It can be, but not always. There are pros and cons to each approach, and sometimes a denormalized structure is the better fit. I doubt anyone would argue that normalization is the right answer 100% of the time. There are more points than these, but I'll cover a few basic ones.
Normalization
Pros:
- Updating tables can be faster (fewer fields)
- Can reduce the risk of database locking
- More flexibility to expand functionality of solution
- Removes or reduces redundancy
- Can decrease data retrieval times for targeted queries (if designed correctly)
- Enforces relationships between data in joined tables (referential integrity)
- A table with many fields that would grow quite large can be split across multiple tables, or even multiple database servers
Cons:
- Indexing strategies are not as effective, since data is spread across multiple tables
- If a developer doesn't know what they are doing (or requirements are incorrectly identified), it can create a mess very quickly
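To make the referential-integrity point concrete, here is a minimal sketch of a normalized two-table design using Python's built-in `sqlite3`. The table and column names (`customer`, `orders`) are illustrative, not from the post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite ignores FKs unless enabled

# Customer data lives in exactly one place; orders reference it by key.
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    total REAL NOT NULL)""")

conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders VALUES (10, 1, 25.0)")

# Referential integrity: an order pointing at a missing customer is rejected.
try:
    conn.execute("INSERT INTO orders VALUES (11, 99, 5.0)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)

# The trade-off: reading the data back requires a join across both tables.
row = conn.execute("""SELECT c.name, o.total
                      FROM orders o JOIN customer c ON c.id = o.customer_id""").fetchone()
print(row)
```

Note the join in the final query: that is the price of splitting the data, and it is exactly what the denormalized approach below avoids.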
Denormalization
Pros:
- Don't have to worry about joins in a denormalized structure
- Reporting and integration can be easier than whiteboarding out a network of tables
- Indexing can be more effective
- Easier to maintain and understand schema
- Good approach to consider when CPU overhead from joins is a concern
Cons:
- Updating tables might be slower
- Ongoing development is concentrated on one large table with many fields
- Retrieving individual records may be slower
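The same data in denormalized form shows both sides of the trade: reporting needs no joins, but redundancy makes updates touch many rows. Again a sketch with `sqlite3` and illustrative names (`order_flat`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One wide table: the customer's name is repeated on every order row.
conn.execute("""CREATE TABLE order_flat (
    id INTEGER PRIMARY KEY,
    customer_name TEXT NOT NULL,
    total REAL NOT NULL)""")
conn.executemany("INSERT INTO order_flat VALUES (?, ?, ?)",
                 [(1, 'Ada', 25.0), (2, 'Ada', 40.0), (3, 'Bob', 10.0)])

# Reporting needs no join: everything is already on one row.
report = conn.execute("""SELECT customer_name, SUM(total)
                         FROM order_flat
                         GROUP BY customer_name
                         ORDER BY customer_name""").fetchall()
print(report)  # [('Ada', 65.0), ('Bob', 10.0)]

# The cost of redundancy: renaming one customer must update every one of
# their rows, and nothing stops the copies from drifting out of sync.
cur = conn.execute("UPDATE order_flat SET customer_name = 'Ada L.' "
                   "WHERE customer_name = 'Ada'")
print(cur.rowcount)  # 2
```

Compare the `UPDATE` row count here with the normalized design, where the same rename would touch a single row in the customer table.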
The ultimate answer is to know both normalized and denormalized database strategies. Through thorough planning of a solution, you can determine the best approach. When evaluating a solution's use of normalization, I think the best initial questions to ask up front are:
a) Are the developers of the solution competent?
b) Did they take normalization into account when designing the solution?
For more information, I strongly encourage you to do some more research into these concepts. It can only help!
http://daxdude.blogspot.com/2013/10/what-is-database-normalization-goodbad.html#!/2013/10/what-is-database-normalization-goodbad.html