Monday, March 26, 2012
Help on Performance with growing database
I'm using SQL 2000. I have a simple database as the backend for a warehouse to
scan products, using Access as the front end. It includes 2 tables: order and
product.
Since we get about 3000 orders coming in every day, and each order has a few
products, the database is growing fast.
I can see the scanning process is a bit slow at the front end. So I kept
current-year data in both tables, and moved older data (3 years) into
historical tables within the same database.
Now that the active tables are smaller, do I need to "trunk" the database since I
deleted records from the active tables? Would this make it better? I don't
know if I can "trunk" one table instead of the whole database?
If anyone has a better idea for this case, it will be welcome.
Thanks
Sarah|||I think you did the right thing by archiving the old data to historical
tables. Partitioning the data like that will decrease I/O.
You could go a step further and place the data on a separate set of physical
disks if you need to access the archive data concurrently with the "active"
data.
I couldn't tell from your description which table the scanning (I'm assuming
you were talking about a "scanner" and not referring to table/index scans)
was accessing. But either way it appears that you're referring to a read
operation. In this case you may want to investigate your indexes to see if
you can find a better index for the query. Use Query Analyzer and look at
the execution plan on the query and verify that it is using the index you
are expecting. Also, you can run SET STATISTICS IO ON before the query runs
and check the output to look at scans and logical reads. This will help you
determine if your optimization efforts are improving things or making it
worse.
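As a sketch of that workflow (the table and column names here are made up for illustration, not taken from Sarah's schema):

```sql
-- Measure logical reads for one query in Query Analyzer (SQL 2000).
-- "orders" and OrderDate are placeholder names for illustration only.
SET STATISTICS IO ON
GO

SELECT OrderID, OrderDate
FROM orders
WHERE OrderDate >= '20060101'   -- Messages tab then shows scan count / logical reads
GO

SET STATISTICS IO OFF
GO
```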
Mark
"Sarah" <sguo@.coopervision.com> wrote in message
news:uBPMh4$rGHA.4264@.TK2MSFTNGP04.phx.gbl...
> Hi,
> I'm using SQL 2000. I have a simple database as backend for warehouse to
> scan the products using access as front end. It includes 2 tables: order
> and product.
> Since everyday we have about 3000 orders to come in, each order has a few
> products. The database is growing fast.
> I can see the scanning process is a bit of slow at the front end. So I
> kept current year data in both tables, and moved older data (3 years) into
> historical tables within the same database.
> Now, the active tables are smaller, do I need to do trunk database since I
> deleted records from the active tables? Would this make it better? I don't
> know if I can do trunk one table instead of the whole database?
> If anyone has better idea regarding this case, it will be welcomed.
> Thanks
> Sarah
>|||Hi Mark,
Thanks for your response. Since our users don't query historical data very
often, can I leave them on the same disk?
Also what I meant scanning is that user uses scanner to collect the barcode
into the system. The user does read and write on both tables.
I'm not sure if "Trunk database" would make it better?
Also I found "tempdb" is growing larger than it should. Is that because I've
been doing queries in Query Analyzer on a daily basis? I checked the
properties; autogrow is set at 10%. How can I tell the growth increment for
"tempdb"?
Thanks a lot,
Sarah
"Mark Miller" <mark_n_0-s*p..am@.maxpreps.com> wrote in message
news:epNlypAsGHA.4608@.TK2MSFTNGP04.phx.gbl...
>I think you did the right thing by archiving the old data to historical
>tables. Partitioning the data like that will decrease I/O.
> You could go a step further and place the data on a separate set of
> physical disks if you need to access the archive data concurrently with
> the "active" data.
> I couldn't tell from your description which table the scanning (I'm
> assuming you were talking about a "scanner" and not referring to
> table/index scans) was accessing. But either way it appears that you're
> referring to a read operation. In this case you may want to investigate
> your indexes to see if you can find a better index for the query. Use
> Query Analyzer and look at the execution plan on the query and verify that
> it is using the index you are expecting. Also, you can run SET
> STATISTICS IO ON before the query runs and check the output to look at
> scans and logical reads. This will help you determine if your optimization
> efforts are improving things or making it worse.
> Mark
>
> "Sarah" <sguo@.coopervision.com> wrote in message
> news:uBPMh4$rGHA.4264@.TK2MSFTNGP04.phx.gbl...
>|||Sarah wrote:
> Hi Mark,
> Thanks for your response. Since our users don't query historical data very
> often, can I leave them on the same disk?
> Also what I meant scanning is that user uses scanner to collect the barcode
> into the system. The user does read and write on both tables.
> I'm not sure if "Trunk database" make it better?
> Also I found "tempdb" is growing than it should be, is that because I've
> been doing queries in Query Analyzer on daily basis? I checked the
> properties, allow auto grow at 10%. How can I tell the grow step for
> "tempdb"?
>
It sounds like you have poorly written queries running against the
database. With proper indexes and efficient coding, your tables should
be able to handle literally MILLIONS of rows. High tempdb utilization
indicates the use of temp tables or large ordering/grouping operations.
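As a quick check on the tempdb question, SQL 2000's built-in procedures report each file's current size and autogrow setting; a minimal sketch, run from Query Analyzer:

```sql
-- Inspect tempdb's file sizes and growth increments (SQL 2000).
-- sp_helpfile lists size, maxsize, and growth for each file in the
-- current database; sp_helpdb gives a summary from any database.
USE tempdb
GO
EXEC sp_helpfile
GO
EXEC sp_helpdb 'tempdb'
GO
```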
I would begin here:
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Use Profiler to identify the query that produces the highest number of
reads over an hour timespan, preferably a busy time of day. Determine
why that query produces the load that it does, and focus on optimizing
it. Repeat this process until your performance has improved.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Sarah,
Yes, you can leave the tables on the same disk. If by "Trunk database" you
mean shrinking the database, no, that won't do anything other than reduce
the current size of the db once you've removed the historical data. So
unless disk space is an issue it won't do much for you. However, after
removing so many records from the table you might benefit from either a
defrag or a rebuild of your indexes.
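As a sketch of those options in SQL 2000 syntax (object names below are placeholders, not Sarah's actual schema):

```sql
-- Option 1: defragment an index in place (stays online, less thorough).
-- Arguments are (database, table, index); 0 means the current database.
DBCC INDEXDEFRAG (0, 'YourTable', 1)
GO

-- Option 2: rebuild all indexes on the table (locks the table, more thorough):
DBCC DBREINDEX ('YourTable')
GO

-- Shrinking is only worthwhile if you need the disk space back:
DBCC SHRINKDATABASE (0)   -- 0 = current database
GO
```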
Tracy makes a valid point as well; you might want to run Profiler to look for
the offending query. My suggestions before were made under the assumption that
you already knew which query was slow, based on the fact that you mentioned
the database only had 2 tables. I assumed that meant you probably didn't
have too many queries to weed through. But you should definitely check the
query and verify the execution plan to make sure it is taking advantage of
your indexes. You want to see index seeks. If it is doing table scans, check
your query. If you are seeing index scans it could still be the queries, but
you'll want to start looking at the indexes and make sure they are optimal.
Mark
Wednesday, March 21, 2012
Help on a Query
I have a table (ResidentList) which includes the following fields
- ID (autonumber)
- Address2 (text)
- Area (text)
Address2 usually contains an apartment number like "122 GV" but is always
blank when Area is "Duplex"
Area contains a string like "Glen View", "Northwind", "Duplex", etc.
I want a query to return the count of unique values for the combination of
(Address2 + Area) and the total count of IDs. The result must be grouped by
Area
I get all tangled up trying to code the count of unique values of Address2 +
Area. I think I am approaching it the wrong way. To get just the count of
IDs grouped by Area I use the following query which works fine but when I
start trying to add the unique values of Address2 + Area I can't find the
correct syntax.
SELECT ResidentList.Area, Count(ResidentList.ID) AS CountOfID
FROM ResidentList
GROUP BY ResidentList.Area
ORDER BY Count(ID) DESC;
--
Any suggestions are appreciated.|||Try this:
SELECT ResidentList.Area, ResidentList.Address2, Count(ResidentList.ID) AS
CountOfID
FROM ResidentList
GROUP BY ResidentList.Area, ResidentList.Address2
ORDER BY Count(ID) DESC;
ML
http://milambda.blogspot.com/|||I appreciate the response, but that results in a row for every value of
Address2. I want the distinct count.
Wayne
"ML" <ML@.discussions.microsoft.com> wrote in message
news:A44A7033-F87F-4C4B-B713-149331B17AA6@.microsoft.com...
> Try this:
> SELECT ResidentList.Area, ResidentList.Address2, Count(ResidentList.ID) AS
> CountOfID
> FROM ResidentList
> GROUP BY ResidentList.Area, ResidentList.Address2
> ORDER BY Count(ID) DESC;
>
> ML
> --
> http://milambda.blogspot.com/|||Can you post the DDL for this and some sample data
"Wayne Wengert" wrote:
> I have a table (ResidentList) which includes the following fields
> - ID (autonumber)
> - Address2 (text)
> - Area (text)
> Address2 usually contains an apartment number like "122 GV" but is always
> blank when Area is "Duplex"
> Area contains a string like "Glen View", "Northwind", "Duplex", ec.
> I want a query to return the count of unique values for the combination of
> (Address2 + Area) and the total count of IDs. The result must be grouped by
> Area
> I get all tangled up trying to code the count of unique values of Address2 +
> Area. I think I am approaching it the wrong way. To get just the count of
> IDs grouped by Area I use the following query which works fine but when I
> start trying to add the unique values of Address2 + Area I can't find the
> correct syntax.
> --
> SELECT ResidentList.Area, Count(ResidentList.ID) AS CountOfID
> FROM ResidentList
> GROUP BY ResidentList.Area
> ORDER BY Count(ID) DESC;
> --
> Any suggestions are appreciated.
>
>|||Sorry - Should have included that in the original post... I needed to
obscure some personal information first.
=============================================
CREATE TABLE [ResidentList] (
[ID] [int] NOT NULL ,
[LastName] [nvarchar] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[FirstName] [nvarchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Address1] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Address2] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[City] [nvarchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[State] [nvarchar] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ZIP] [nvarchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Area] [nvarchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Phone] [nvarchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Email] [nvarchar] (65) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(1,
'Burton', 'Betty', '2101 S. Podunk Ave', '139 GV', 'SomeCity', 'CA',
'12345', 'Garden View', '555-9024', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(2,
'Parker', 'Bonnie', '2101 S. Podunk Ave.', '138 GV', 'SomeCity', 'CA',
'12345', 'Garden View', '555-6738', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(3,
'Galloway', 'Richard', '2101 S. Podunk Ave.', '126 GV', 'SomeCity', 'CA',
'12345', 'Garden View', '555-3213', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(4,
'Martin', 'Ruth A.', '2113 S. Podunk Ave.', NULL, 'SomeCity', 'CA', '12345',
'Duplex', '555-1855', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(5,
'Adams', 'Ike', '2101 S. Podunk Ave.', '234 B', 'SomeCity', 'CA', '12345',
'Boulevard', '555-9679', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(6,
'Anderson', 'Barbara', '2101 S. Podunk Ave.', '039 GV', 'SomeCity', 'CA',
'12345', 'Garden View', '555-1286', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(7,
'Bachman', 'Joe', '2101 S. Podunk Ave.', '252 BH', 'SomeCity', 'CA',
'12345', 'Blue Heron', '555-5012', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(8,
'Bailey', 'Richard', '2101 S. Podunk Ave.', '156 BH', 'SomeCity', 'CA',
'12345', 'Blue Heron', '555-6454', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(9,
'Baker', 'Jewell', '2101 S. Podunk Ave.', '136 GV', 'SomeCity', 'CA',
'12345', 'Garden View', '555-37778', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(10,
'Ballard', 'Margie', '2101 S. Podunk Ave.', '043 BH', 'SomeCity', 'CA',
'12345', 'Blue Heron', '555-4623', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(11,
'Benson', 'Helen Louise', '2101 S. Podunk Ave.', '409 AL', 'SomeCity', 'CA',
'12345', 'Assisted Living', '555-0684', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(12,
'Bernhardt', 'Irene', '2101 S. Podunk Ave.', '130 B', 'SomeCity', 'CA',
'12345', 'Boulevard', '555-0771', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(13,
'Bigge', 'CArrine', '2101 S. Podunk Ave.', '121 B', 'SomeCity', 'CA',
'12345', 'Boulevard', '555-4052', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(14,
'Bischoff', 'Donna', '2101 S. Podunk Ave.', '135 GV', 'SomeCity', 'CA',
'12345', 'Garden View', '555-3739', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(15,
'Bishop', 'Ruth', '2101 S. Podunk Ave.', '204 AL', 'SomeCity', 'CA',
'12345', 'Assisted Living', '555-9241', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(16,
'Blomberg', 'Ruth', '2101 S. Podunk Ave.', '226 B', 'SomeCity', 'CA',
'12345', 'Boulevard', '555-9772', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(17,
'Boeke', 'Adah', '2101 S. Podunk Ave.', '427 BS', 'SomeCity', 'CA', '12345',
'Boulevard South', '555-5359', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(18,
'Bower', 'Dawn', '2101 S. Podunk Ave.', '137 GV', 'SomeCity', 'CA', '12345',
'Garden View', '555-2020', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(19,
'Brennesholt', 'Evelyn', '2101 S. Podunk Ave.', '315 AL', 'SomeCity', 'CA',
'12345', 'Assisted Living', '613-1558', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(20,
'Brown', 'Roberta', '2101 S. Podunk Ave.', '038 GV', 'SomeCity', 'CA',
'12345', 'Garden View', '555-8590', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(21,
'Brownlee', 'Jo', '2101 S. Podunk Ave.', '433 B', 'SomeCity', 'CA', '12345',
'Boulevard', '555-4432', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(22,
'Bruce', 'William', '2101 S. Podunk Ave.', '158 BH', 'SomeCity', 'CA',
'12345', 'Blue Heron', '555-0161', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(23,
'Bruce', 'Thelma', '2101 S. Podunk Ave.', '158 BH', 'SomeCity', 'CA',
'12345', 'Blue Heron', '555-0161', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(24,
'Bucklen', 'Helen', '2101 S. Podunk Ave.', '258 BH', 'SomeCity', 'CA',
'12345', 'Blue Heron', '555-0327', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(25,
'Bucklen', 'Sharon', '2101 S. Podunk Ave.', '258 BH', 'SomeCity', 'CA',
'12345', 'Blue Heron', '555-0327', NULL)
Insert into [ResidentList] ([ID], [LastName], [FirstName], [Address1],
[Address2], [City], [State], [ZIP], [Area], [Phone], [Email]) values(26,
'Callahan', 'Margaret', '2101 S. Podunk Ave.', '322 B', 'SomeCity', 'CA',
'12345', 'Boulevard', '555-8198', NULL)
=============================================
"Sha Anand" <ShaAnand@.discussions.microsoft.com> wrote in message
news:18A0273D-337D-4E90-916C-52EDA2A0DA3C@.microsoft.com...
> Can you post the DDL for this and some sample data
> "Wayne Wengert" wrote:
>|||try Count(distinct (Area+Address2))?
YH
"Wayne Wengert" wrote:
> I have a table (ResidentList) which includes the following fields
> - ID (autonumber)
> - Address2 (text)
> - Area (text)
> Address2 usually contains an apartment number like "122 GV" but is always
> blank when Area is "Duplex"
> Area contains a string like "Glen View", "Northwind", "Duplex", ec.
> I want a query to return the count of unique values for the combination of
> (Address2 + Area) and the total count of IDs. The result must be grouped by
> Area
> I get all tangled up trying to code the count of unique values of Address2 +
> Area. I think I am approaching it the wrong way. To get just the count of
> IDs grouped by Area I use the following query which works fine but when I
> start trying to add the unique values of Address2 + Area I can't find the
> correct syntax.
> --
> SELECT ResidentList.Area, Count(ResidentList.ID) AS CountOfID
> FROM ResidentList
> GROUP BY ResidentList.Area
> ORDER BY Count(ID) DESC;
> --
> Any suggestions are appreciated.
>
>|||I had tried that. I get an error that "distinct" is an undefined function in
the expression?
"YH" <YH@.discussions.microsoft.com> wrote in message
news:904262EB-0A7F-4111-99FA-123CF4CB36D3@.microsoft.com...
> try Count(distinct (Area+Address2))?
> YH
> "Wayne Wengert" wrote:
>|||Is this what you are looking for?
SELECT ResidentList.Area, Count(DISTINCT ISNULL(ResidentList.Address2,''))
AS CountOfID
FROM ResidentList
GROUP BY ResidentList.Area
ORDER BY ResidentList.Area
Assisted Living 3
Blue Heron 5
Boulevard 6
Boulevard South 1
Duplex 1
Garden View 8
- Sha Anand
"Wayne Wengert" wrote:
> I had tried that. I get an error that "distinct" is an undefined function in
> the expression?
>
> "YH" <YH@.discussions.microsoft.com> wrote in message
> news:904262EB-0A7F-4111-99FA-123CF4CB36D3@.microsoft.com...
>
>|||Sha;
Interesting approach, but when I try it I get a "Syntax error (missing
operator) in expression 'Count(DISTINCT ISNULL(ResidentList.Address2,''))'"
Wayne
"Sha Anand" <ShaAnand@.discussions.microsoft.com> wrote in message
news:F235F5F4-BD81-49FA-AFF6-CDD04ED92CC7@.microsoft.com...
> Is this what you are looking at
>
> SELECT ResidentList.Area, Count(DISTINCT ISNULL(ResidentList.Address2,''))
> AS CountOfID
> FROM ResidentList
> GROUP BY ResidentList.Area
> ORDER BY ResidentList.Area
> Assisted Living 3
> Blue Heron 5
> Boulevard 6
> Boulevard South 1
> Duplex 1
> Garden View 8
>
> - Sha Anand
> "Wayne Wengert" wrote:
>|||Hi Wayne,
The query I sent you works fine in SQL 2000.
By looking at the error msg, I assume that you are trying to run this
in MS-Access. Not all SQL 2000 queries are compatible with MS-Access;
you need to check the MS-Access documentation.
Otherwise you can create a view in SQL 2000 and use it in MS-Access
- Sha Anand
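If the query has to run inside Access itself, one possible workaround (a sketch only; Jet's derived-table syntax varies by Access version) is to take the distinct (Area, Address2) combinations in a subquery first, since Jet SQL does not support COUNT(DISTINCT ...):

```sql
-- Count distinct Address2 values per Area via a derived table,
-- avoiding COUNT(DISTINCT ...), which Jet SQL rejects.
SELECT T.Area, Count(*) AS CountOfAddresses
FROM (SELECT DISTINCT Area, Address2 FROM ResidentList) AS T
GROUP BY T.Area
ORDER BY Count(*) DESC;
```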
"Wayne Wengert" wrote:
> Sha;
> Interesting approach, but when I try it I get a "Syntax error (missing
> operator) in expression 'Count(DISTINCT ISNULL(ResidentList.Address2,''))'
> Wayne
> "Sha Anand" <ShaAnand@.discussions.microsoft.com> wrote in message
> news:F235F5F4-BD81-49FA-AFF6-CDD04ED92CC7@.microsoft.com...
>
>
Wednesday, March 7, 2012
Help moving from SQL Server Express to SQL Server Developer
I built a web application using my VS2005 installation, which includes SQL Server Express.
All works nicely. However, when I tried to move this application to a system with SQL Server Developer, I cannot get it to work at all - data access receives several errors, with the most common being this one:
Cannot open database "xyz.mdf" requested by the login. The login failed. Login failed for user 'MACHINE\ASPNET'.
Here is my connection string (with specific names changed, obviously):
Data source=MACHINE;Initial Catalog=xyz.mdf;Integrated Security=SSPI;User=Me;Password=MyPassword
I have given users MACHINE\ASPNET and MACHINE\Me all possible permissions (that I know) within SQL Server.
I have spent days on this, with no solution - help would be most appreciated.
use the following syntax for the connection string:
<connectionStrings> <add name="DatabaseConnection" connectionString="Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI;" /></connectionStrings>|||
Thank you!
I must admit, I do not exactly understand the difference between my example and yours - but yours worked!
Much appreciated.
|||Hi,
This is because, unlike SQL Express, connecting to the SQL Server Developer edition requires you to use the database name (Initial Catalog) instead of the database file name.
Also, you need to remove the username and password from the connection string to make sure that you're connecting with Windows integrated authentication.
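For reference, the two connection styles look roughly like this (server, file, and database names below are placeholders matching the thread's examples, not a real configuration):

```xml
<!-- SQL Server Express attaching an .mdf file directly (user instance): -->
<add name="ExpressConnection"
     connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\xyz.mdf;Integrated Security=SSPI;User Instance=True;" />

<!-- Full SQL Server (e.g. Developer edition), addressing the database by name: -->
<add name="ServerConnection"
     connectionString="Data Source=MACHINE;Initial Catalog=xyz;Integrated Security=SSPI;" />
```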