
Pocket Query Generation Problems


Elias

Recommended Posts

I still think that having a standard pocket query that gives all caches within a 100-mile radius of cities with a population of over 1 million, downloadable from a separate page, would greatly reduce the number of queries being run.

 

You could limit the number of "Super Queries" to one a day or, better yet, 5 per month. This would allow most people to get a query around their house, or to use up all 5 in a month for a trip.

 

It doesn't cover the entire US, but I still think it would reduce a lot of the complaints about limits and about setting up multiple queries sorted by date, etc.

Link to comment

Would I be correct in thinking that there are several servers involved in processing pocket queries? It seems that there is a pocket query definition database, plus the cache and log databases used to define and process queries, and an email server to send the results out.

 

The query page first moves a query for today to the bottom of the list, adding the date (presumably when the query starts), then it adds the strikethrough to the title (presumably when the query completes). There is then some delay before the email is delivered.

 

Today my queries ran as they usually do according to the pocket query page, but I haven't received the emails half a day later. This could be caused by the email sender or by my ISP's receiving side. I am receiving other emails today, including LOG messages for Watchlists and Bookmarks.

 

Is there a problem with sending pocket query emails today?

 

Nudecacher

Link to comment
Would I be correct in thinking that there are several servers involved in processing pocket queries? It seems that there is a pocket query definition database, plus the cache and log databases used to define and process queries, and an email server to send the results out.

Yeah, that's pretty close. There is a dedicated PQ Generator server that runs all the PQs. It queries a separate dedicated database server with a copy of the Geocaching data just for PQs. Once the PQ has been generated, it gets handed off to our mail server for delivery.
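
For anyone curious, that hand-off is easy to picture in miniature. This is only an illustrative sketch of the three stages described above, not Groundspeak's actual code; the schema, the GPX stand-in, and the mail queue are all made up:

```python
# Toy sketch of the pipeline described above (all names hypothetical):
# 1) run the query against the dedicated copy of the data,
# 2) zip the resulting GPX file,
# 3) hand the archive off to the mail server's queue.
import sqlite3
import zipfile

def generate_pq(name: str, sql: str, db_path: str, mail_queue: list) -> None:
    with sqlite3.connect(db_path) as db:      # dedicated PQ database copy
        rows = db.execute(sql).fetchall()
    gpx = "<gpx><!-- %d waypoints --></gpx>" % len(rows)  # stand-in for real GPX
    zip_name = name + ".zip"
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(name + ".gpx", gpx)       # compress before mailing
    mail_queue.append(zip_name)               # hand-off to the mail server
```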

 

Today my queries ran as they usually do according to the pocket query page, but I haven't received the emails half a day later.

Your queries were delivered to your ISP within 30 seconds of being generated. I'll send you a PM with the log entries.

 

:blink: Elias

Link to comment
Wow, then I must obviously misunderstand how queries work.

 

Given that I have to break up a given radius into blocks ordered by date:

 

Queries:

- certain distance from point, limited to 500 total records, between Jan 1, 2001 and Dec 31, 2002.

- certain distance from point, limited to 500 total records, between Jan 1, 2003 and Dec 31, 2003.

- certain distance from point, limited to 500 total records, between Jan 1, 2004 and Dec 31, 2004.

- certain distance from point, limited to 500 total records, between Jan 1, 2005 and Dec 31, 2005.

 

...takes no longer, or not much longer, to run than:

- certain distance from point, limited to 2500 total records?

 

(with none of the results hitting the number limit.)

 

Curious.

 

But I'd opt for a single 2500-cache query a day over five 500-cache queries. Bump the number of logs included and I could safely drop any one query to once a week without fear of missing something.

That's how I have my home state set up. It takes 4 PQs to get the entire state, and that will soon increase to 5. There are 1300+ caches I haven't found within 100 miles of me, so it takes 3 PQs to get those. The 4th PQ covers the outlying areas of the state. I don't necessarily need that data every week, maybe just once a month or so, but with the way the system is set up I can't easily change it.

Edited by Team GPSaxophone
Link to comment

FWIW, I have two daily PQs that I haven't been receiving:

 

Orlando Unfound was last run on 7/11/2005 at 11:48:12 PM

Melbourne was last run on 7/13/2005 at 5:17:52

 

Based on the thread, I've throttled it back to 5 days a week instead of daily.

 

Hopefully this will be fixed soon. Since they haven't been generated in several days, and since I know there are new caches in my area, I guess I need to create a new one that does what my old one does...

 

:P

Link to comment

For years I have been running 5 pocket queries every day. These 5 queries are enough to get all the caches in my country, the Netherlands, up to date every day.

The last one was sent on 7/12/2005 at 10:00:33 PM.

So I have already been missing my queries for 4 days.

 

I have checked all my queries and my alternate email address, and nothing is wrong here. :P

I hope the problems are over soon. I am glad there are many more people with the same problems, so it isn't a malfunction with me, my server, or my equipment :P

 

It's not who you are, it's what you do that makes the difference.

Link to comment
For years I have been running 5 pocket queries every day. These 5 queries are enough to get all the caches in my country, the Netherlands, up to date every day.

The last one was sent on 7/12/2005 at 10:00:33 PM.

So I have already been missing my queries for 4 days.

 

I have checked all my queries and my alternate email address, and nothing is wrong here. :P

I hope the problems are over soon. I am glad there are many more people with the same problems, so it isn't a malfunction with me, my server, or my equipment :P

 

It's not who you are, it's what you do that makes the difference.

I am sorry, I used the wrong account for this question.

 

The problem is the same, only the account is different :lol:

 

So, when someone is looking for my running queries, search for Team Dauwtrappers and not for Kruimeldief.

 

Team Dauwtrappers,

a.k.a. Kruimeldief & KralenFee

 

All women tell me I'm a bad man, but wasn't Batman a good guy?

Link to comment
By allowing, say, 2500 caches in a single query, the number of queries could drop dramatically.

No. Not true.

 

1. The number of 2500-cache queries would increase dramatically, whether the geocacher needs them or not.

2. Five PQs of 500 apiece, or one PQ of 2500, is a constant amount of data and involves the same processing power, if not more, since the longer query time for each PQ would increase the time before another PQ could be processed.

 

Two low-hanging-fruit reasons why this idea does not help the issue.

 

It seems to me that one search of the database to find perhaps 2500 caches would be quicker than 5 searches for the same data, especially when PQs are structured to report up to 500 caches but are set to never find all 500.

 

As an example, I have two PQs for Western Australia. The first PQ selects 500 caches placed prior to a particular date (about 15th May 2005). The second PQ selects 500 caches placed after that date. Neither search reports 500 caches, so the entire database must be searched for each PQ.

 

It's true that increasing the selection count from 500 to whatever would slow down the running of that PQ, but that may be preferable to having a multitude of PQs searching exactly the same data over and over again when it isn't necessary.

 

I also have 11 PQs that report on unknown-type caches in the US. Each PQ is structured to report on up to 500 caches, but never finds that many. I run these PQs on demand only, perhaps once every 6 months, since I only look for ideas in them. But the point is that the entire database is searched for each PQ; that is 11 searches instead of just 1.

 

There has to be a compromise somewhere, though. Setting the number too high may cause dramatic slowdowns in PQ run times, and may also report caches not required by users. There may also be limitations on PQ reporting because of email attachment size.

 

A suggestion: increase the number of caches reported to perhaps 1000 or 1500 or even 2500, then email everyone who creates PQs informing them of the change and requesting that they alter their PQs to suit. Then sit back and see if the load increases or decreases. B)

Link to comment
As an example, I have two PQs for Western Australia. The first PQ selects 500 caches placed prior to a particular date (about 15th May 2005). The second PQ selects 500 caches placed after that date. Neither search reports 500 caches, so the entire database must be searched for each PQ.

The entire database doesn't have to be searched if the database has an index on date. In that case only the geocaches with a date in the range you are searching for will be checked. Also, if there is an index on country, only those geocaches will be checked as well. Considering how fast a PQ preview works, I am pretty sure that Jeremy and company have put a good bit of thought into which fields to index.
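
To make that concrete with a toy example (a hypothetical schema in sqlite3, nothing like the real one), the query planner reports an index search instead of a full table scan once the date column is indexed:

```python
# Toy demonstration of why an index on date avoids scanning the whole
# table for a date-range PQ. The schema here is invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE caches (id INTEGER PRIMARY KEY, country TEXT, placed DATE)")
db.execute("CREATE INDEX idx_placed ON caches (placed)")

plan = db.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM caches WHERE placed BETWEEN '2005-01-01' AND '2005-12-31'"
).fetchall()
print(plan)  # reports a SEARCH using idx_placed rather than a SCAN of caches
```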

Link to comment
The entire database doesn't have to be searched if the database has an index on date. In that case only the geocaches with a date in the range you are searching for will be checked. Also, if there is an index on country, only those geocaches will be checked as well. Considering how fast a PQ preview works, I am pretty sure that Jeremy and company have put a good bit of thought into which fields to index.

Thanks for the reply, Allen - I hadn't even considered indexes. The last database (Pick) that I worked on didn't have that sort of thing. B)

Link to comment

I wouldn't need PQs if there were a page offering downloads of all caches, split up per country, state, or part thereof (say some 1000 caches per download, with the files zipped).

I live in Belgium, which has some 1000+ caches; Germany could be split up into 10 or so groups of 1000 caches.

If these were refreshed daily, that would be fine.

 

Many people (like me) use GSAK and make their final queries within GSAK, so as far as I am concerned, I don't need the PQs with all their complexity.

Link to comment

I like the idea of having GPX downloads for all the caches in a country pre-generated and available for download. I'm guessing that pre-running those 300-odd queries would drop the number of PQs that are executed quite drastically.

Link to comment

PQs are used in various ways, and are often used because the appropriate alternative doesn't exist. Groundspeak probably doesn't know all these uses, but they could go out and find out...

 

In many countries and states there are people who keep their national statistics up to date.

In Belgium they publish lots of statistics: number of finds in Belgium by Belgian cachers, and so on.

This is done by analysing many PQs.

It would be far simpler if Groundspeak offered files with such statistics directly; no PQs would be needed for that anymore.

 

Every 3 weeks or so, I make a manual update of the all-world finds of the top 50 Belgian cachers (by viewing some 70 pages). It is work that could easily be replaced if Groundspeak made national statistics available.

IMO these are very simple batch jobs.
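
As a rough illustration of how small such a job could be (the schema here is entirely hypothetical, not Groundspeak's), a find-count table is a single aggregation pass:

```python
# Sketch of the suggested batch job: aggregate find counts per cacher from
# a logs table and dump a top-50 stats file. Schema and names hypothetical.
import csv
import sqlite3

def dump_find_counts(db_path: str, out_path: str) -> None:
    with sqlite3.connect(db_path) as db:
        rows = db.execute(
            "SELECT finder, COUNT(*) AS finds FROM logs "
            "WHERE log_type = 'Found it' GROUP BY finder "
            "ORDER BY finds DESC LIMIT 50"
        ).fetchall()
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows([("finder", "finds"), *rows])
```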

 

Personally, I like the trade-off: reduce the PQ size (say to 250) and offer standardized pre-run queries (say of size 1000) that cover most demand.

 

A side remark: there is such a thing as hardware capacity planning. How is it that the hardware is not able to cope with the increased number of PQs?

Link to comment
Personally, I like the trade-off: reduce the PQ size (say to 250) and offer standardized pre-run queries (say of size 1000) that cover most demand.

 

Sounds like a great idea to me. But I'm no expert.

 

A side remark: there is such a thing as hardware capacity planning. How is it that the hardware is not able to cope with the increased number of PQs?

 

What kind of hardware / software is Groundspeak using nowadays? Is this listed somewhere?

Link to comment

Another suggestion.

Enable some standard queries that nearly all of us use.

Only allow these to be run on low server-load days/times.

I would guess the common ones are:

Found by me

Owned by me

Not found by anyone at all (FTF hunting)

Close to home but not found by me.

 

Or allow us to generate PQs that combine searches rather than exclude them.

For example, it is impossible to set up a single PQ for caches you own plus caches you have found; these have to be done as separate PQs.
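
To sketch what "combine" could look like at the query level (made-up schema and user id, purely illustrative), a single UNION covers what currently needs two separate PQs:

```python
# Sketch of 'combine rather than exclude': one query ORs together the two
# criteria that today require two PQs. Schema and column names invented.
import sqlite3

def owned_or_found(db: sqlite3.Connection, user: str) -> list:
    return db.execute(
        "SELECT id FROM caches WHERE owner = ? "
        "UNION "
        "SELECT cache_id FROM logs WHERE finder = ? AND log_type = 'Found it'",
        (user, user),
    ).fetchall()
```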

Link to comment

Given the individual criteria that people use, that wouldn't be very easy to do. I filter out puzzles. Do you? What about people who want them? Or only them? People also want caches ranged out from their centerpoint. In large metropolitan areas, your centerpoint might be very different from that of someone you know, cache with, and attend events with. Here in SF/SJ CA we have 1000s of caches - pocket queries are the only way we can manage them.

 

Now that I've said that, I would like to say that I can't understand why anyone would want 2500 caches in a query. I only have one 250-cache query (home) and a number of 100-cache queries centered on areas that I frequent - and I only schedule those on an as-needed basis. Still, if I want to run a query that has already run within the last few days, I have to dummy it up as another one-time query.

 

Perhaps some way of including size with age when ranking the query queue?

 

And as well: I often look at the inactivity of cachers when dealing with missing caches. They are legion, and they don't answer their email either. PQs should either be turned off for those not logged in within a month, or assigned a finite limit that needs to be manually refreshed. E.g., this query will run 50 times, then you have to reset its limit. Every time it runs, that limit counts down towards 0.
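
That countdown rule is simple enough to sketch; the field names here are invented, it's just the mechanism:

```python
# Sketch of the proposed finite run limit: each run counts the limit down
# towards 0, and at 0 the query stops until the owner resets it by hand.
from dataclasses import dataclass

@dataclass
class PocketQuery:
    name: str
    runs_remaining: int = 50

    def try_run(self) -> bool:
        if self.runs_remaining <= 0:
            return False            # disabled until manually refreshed
        self.runs_remaining -= 1    # counts down towards 0 on every run
        return True

    def reset(self, limit: int = 50) -> None:
        self.runs_remaining = limit  # the manual refresh by an active owner
```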

Link to comment
Given the individual criteria that people use, that wouldn't be very easy to do. I filter out puzzles. Do you? What about people who want them? Or only them? People also want caches ranged out from their centerpoint. In large metropolitan areas, your centerpoint might be very different from that of someone you know, cache with, and attend events with. Here in SF/SJ CA we have 1000s of caches - pocket queries are the only way we can manage them.

 

Perhaps some way of including size with age when ranking the query queue?

 

I do my filtering with GSAK.

We only do puzzles that are near home, not while on holiday.

I made a guess at what the most common queries are.

Size: yes, one PQ to retrieve the 11 caches hidden by me does seem a waste.

 

We only paid the membership fee to be able to have 5 PQs a day and 20 preset queries; it would be good if we actually received what we are paying for.

Link to comment
Size: yes, one PQ to retrieve the 11 caches hidden by me does seem a waste.

Sure does. That's why I went to the cache pages for the caches I own and just did single-page .gpx downloads, then merged them together offline, instead of wasting a PQ to get them. Now, if I had 100 or so hides, yeah, I'd probably use a PQ to get them all, but with just a handful it only takes a few minutes to get them one at a time.

Link to comment

While I understand the claim that the actual database query runs "fast enough" and that size is a factor, the issue must be something else.

 

Here are a couple of ways of reducing the query time and the resulting file size: two additional options for the number of logs returned.

 

One: no logs. This would speed up the overall generation of the query and reduce the size of the GPX file and the associated costs.

 

Two: all logs from the past 7 days. While PQs weren't designed for folks to keep offline data, it is happening. PQs are far too complicated and unreliable to generate only on the spur of the moment, so with the help of GSAK, folks like myself are keeping offline databases of all the caches in our hunting areas. With the option to download only the caches changed in the past 7 days, the number of results is reduced. A complement to this would be receiving only those logs written in the past 7 days, but without the 5-log limit. If a query returned only the caches with changes in the past 7 days, and returned ALL of those changes, i.e. every log from the past 7 days, then any query run once a week would have ALL the data for that week. This would eliminate having to run a query multiple times a week for fear of missing a log.

 

Also, since the vast majority of caches don't get visited every week, it follows that the vast majority of caches get only one or two logs a week. In most cases, then, 5 logs is overkill.

 

So, by giving the option of getting no logs, or only the 1 or 2 logs that were actually written, we should reduce the query time and the size of the results returned. Smaller files should mean reduced time to compress, move, and send the file, and reduced bandwidth.
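
The 7-day variant above is easy to sketch over an already-parsed cache list; the dict layout here is illustrative, not the real GPX schema:

```python
# Sketch of the weekly-delta idea: keep only caches changed in the past 7
# days, and for those, every log of the week (no 5-log cap). Layout invented.
from datetime import datetime, timedelta

def weekly_delta(caches: list, now: datetime) -> list:
    cutoff = now - timedelta(days=7)
    out = []
    for cache in caches:
        recent = [log for log in cache["logs"] if log["date"] >= cutoff]
        if recent or cache["last_updated"] >= cutoff:
            out.append({**cache, "logs": recent})  # all of the week's logs
    return out
```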

Link to comment

Here's another thought or two:

 

Allow some of the more heavyweight features that you fear, but flag them so they can only be run on certain days. I believe Elias said Monday through Thursday are considerably lower-load days.

 

This way you could allow downloads of caches with all logs - a heavy query, as I seem to recall you saying in the past.

 

What about adding a BCC - and the resulting change to the TOU - and allowing users to send other authorized users a PQ? I could set up a PQ to send every PM in SC a list of all caches in SC, eliminating the need for every other user in SC to do it on their own. All you have to do is add the additional field and the account-checking routines so non-PMs would not be put on the list. The administration of the list would be on the shoulders of the PQ "owner." That query would run only once versus many times. The only thing that would stay the same is that the mail server would still send out the same number of emails.

 

Yes, I understand there will be some abuses, but I'm sure the "Big Ideas Guys" could flesh this concept out to actually make it work. There is really no need for hundreds of people to run essentially the same PQ over and over individually when one would do.
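
For what it's worth, the mailing side of this is routine; a minimal sketch of the BCC fan-out with placeholder addresses and server (Python's standard mail library strips the Bcc header before sending):

```python
# Sketch of the BCC idea: run one PQ, mail the single result to an opt-in
# list of members. Addresses and SMTP host below are placeholders.
import smtplib
from email.message import EmailMessage

def send_shared_pq(zip_path: str, subscribers: list) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Shared Pocket Query"
    msg["From"] = "pq-owner@example.com"
    msg["To"] = "pq-owner@example.com"
    msg["Bcc"] = ", ".join(subscribers)   # recipients stay hidden from each other
    msg.set_content("Weekly shared PQ attached.")
    with open(zip_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="zip", filename=zip_path)
    with smtplib.SMTP("smtp.example.com") as s:
        s.send_message(msg)               # Bcc header is removed before sending
```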

Link to comment
What about adding a BCC - and the resulting change to the TOU - and allowing users to send other authorized users a PQ? I could set up a PQ to send every PM in SC a list of all caches in SC, eliminating the need for every other user in SC to do it on their own. All you have to do is add the additional field and the account-checking routines so non-PMs would not be put on the list. The administration of the list would be on the shoulders of the PQ "owner." That query would run only once versus many times. The only thing that would stay the same is that the mail server would still send out the same number of emails.

Cool idea - just a note, though: PQs do contain a flag indicating whether you have already found each cache, so this kind of shared GPX would probably need to be one without any user-dependent data.

Link to comment
The key to the workarounds is understanding how the PQ queue works. This is also documented on the "My Pocket Queries" page, but essentially, the order in which PQs are run is based on the "Last Generated" date, with the oldest queries running first. It's almost a guarantee that a query scheduled for every day of the week won't run on Thursday or Friday, ever. However, a query scheduled for just Thursday, or just Friday, should run reliably each week.

I'm afraid the workaround described above could just make things even worse, as a lot of users will simply make 7 identical queries and have each of them run only once per week on different weekdays. This will bring them up in the queue but increase the number of PQs by quite a lot and slow things down even further. Correct me if I'm wrong.
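
The queue rule quoted above is easy to model; a toy sketch (field names invented) showing why a once-a-week query outranks a daily one:

```python
# Toy model of the documented queue rule: among the PQs scheduled for
# today, the one with the oldest 'Last Generated' date runs first.
from datetime import date

def next_to_run(pending: list) -> dict:
    # pending: dicts like {"name": ..., "last_generated": date(...)}
    return min(pending, key=lambda pq: pq["last_generated"])

queue = [
    {"name": "daily-home",    "last_generated": date(2005, 7, 15)},
    {"name": "thursday-only", "last_generated": date(2005, 7, 7)},
]
print(next_to_run(queue)["name"])  # thursday-only: it has waited longest
```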

 

/Jesper

Link to comment

I asked this before but never received an answer. Would it be possible to add "presets" to the new pocket query creation form? I run a pocket query centered on my home coordinates for selected cache types. Instead of filling out all the required info every time I want to run this pocket query, it would be handy to be able to make a selection from a drop-down list which would fill in everything except the day of the week to run the query.

Edited by Rattlehead
Link to comment

Hello,

 

The last time my PQs were executed completely and perfectly was on July 12th. Since then, I receive some of them or nothing at all. I only have 3 PQs, running 3 times per week. It's becoming frustrating. :o

 

I guess GS is working on solutions. Hopefully it will work before my vacation! <_<

 

Anyways, ciao!

 

Mart

Link to comment

My problem was: I ran a PQ (Friday morning) for the Campbell, CA area; there is an event there today. I checked the ACTIVE caches box, yet all the results in the PQ were for INACTIVE caches. The preview was OK, but what was on my PDA was not. I am glad I caught it in time.

 

Don

Link to comment
Now that I have checked the forum, I can see it is an ongoing issue for everyone.

Not trying to minimize the pain of PQ problems...

 

But it's not a problem for "everyone". I had some trouble with delivery a few weeks ago when there was an e-mailing problem, but other than that I haven't had problems.

 

I mention this because the problem you have is probably not "general", so I don't think you can rely on a fix by waiting for "THE" PQ problem to be solved.

 

Specific contact with Groundspeak about your exact issue might be best...

Link to comment

Has any analysis been done on common pocket queries? If, say, 1000 people request the same query on a daily or even weekly basis, perhaps that query could be processed only once and the result cached. The cached copy could then be sent to the 1000 people.

 

For example, I like to keep all the current caches in the UK in GSAK. I imagine that there are a lot of others in the UK who do the same. At the moment I run about 5 queries a week to maintain the data, and sometimes I re-import it all (15 queries) when it gets a bit stale.

 

If lots of other people do the same, then perhaps Geocaching.com should be able to send one compressed GPX file per week to all UK paid members who want it. This would then only cost:

 

+ The time taken to process this one single query once

+ The bandwidth needed to send it to X number of UK paid members

 

The file could even be distributed using BitTorrent to save bandwidth.
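
The caching itself would be cheap; a sketch of the idea, with the canonical form and cache key invented purely for illustration:

```python
# Sketch of result caching for identical queries: normalize the PQ
# definition, hash it, and only generate on a cache miss. Names invented.
import hashlib
import json

_cache: dict = {}

def run_or_reuse(params: dict, generate) -> bytes:
    key = hashlib.sha1(
        json.dumps(params, sort_keys=True).encode()  # canonical query form
    ).hexdigest()
    if key not in _cache:
        _cache[key] = generate(params)   # processed once...
    return _cache[key]                   # ...then reused for everyone else
```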

 

Dan

Link to comment

I've been ignoring this thread till now, and even now I'm mostly skimming...

While everyone's throwing out ideas to help lessen the PQ load, I'd like to suggest (if it hasn't been suggested already) that PQs be made to default (another check box, maybe?) to running once and then becoming disabled (not deleted!).

From this thread it's apparent there is a wide range of ways people schedule their PQs. For myself, I usually leave them all disabled and run them as needed. I have many PQs set up: a couple local, a couple more for places I visit a few times a year, and one or two more for one-time visits that I edit for the destination before running.

What sometimes happens is that I don't uncheck last week's PQ in time and end up getting one of those too... which was a waste of time to run, and just ends up being extra junk to be deleted from my inbox. Yes, I could use the 'run once and delete' option, but then I'd have to look up zip codes or coordinates...

I doubt this would make any large impact on the current problem, but while possible changes to the PQ setup are being considered, I wanted to throw that in.

Link to comment

Aaaaagh!!!! I just spent about a half hour on a detailed reply/request for this thread then took a power hit. Gone!!!

 

A very long story short (probably for the better): I currently, as some others have mentioned, run a set of 4 queries for ALL the caches that I have not found in my state (currently about 1700, I think). I also fairly regularly run a set of 5 (soon to be 6) queries for a neighboring state. While TOTAL PQ overkill, it's the easiest way to keep my local data current. I dump it ALL into GSAK on the laptop when ready to go, along with maybe a last-minute local update (to pick up recent logs) if we know we're headed to a very specific area, and go.

 

A MUCH easier and far more efficient approach (if it's possible to program) would be a query for a WHOLE state that shows EVERYTHING that has changed status in the last week, month, whatever. The query would need to supply ANY cache that has gone from active to temporarily disabled or archived, and the opposite. I'd guess that the average number of changes in a week would normally be 100 or so caches. Maybe I'm wrong on that, but it's certainly WAY fewer than 500.

 

The alternative is to just run the current sets of queries, which total nearly 4000 caches, as often as I can until some better query becomes available. I have already voluntarily cut down the number of times I run the nearby state (which I often have to edit to stay under the 500-cache limits), and I certainly do not run my home state as often as I once did. IT SURE WOULD BE NICE TO GRAB THE WHOLE STATE ONCE AND THEN BE ABLE TO EASILY (with much less load on the system) KEEP IT CURRENT.

 

Please!!!

Link to comment
For myself, I usually leave [my PQs] all disabled and run them as needed. <snip>

 

What sometimes happens is that I don't uncheck last week's PQ in time and end up getting one of those too... which was a waste of time to run, and just ends up being extra junk to be deleted from my inbox.

We agree.

Link to comment
A MUCH easier and far more efficient approach (if it's possible to program) would be a query for a WHOLE state that shows EVERYTHING that has changed status in the last week, month, whatever. The query would need to supply ANY cache that has gone from active to temporarily disabled or archived, and the opposite. I'd guess that the average number of changes in a week would normally be 100 or so caches. Maybe I'm wrong on that, but it's certainly WAY fewer than 500.

As someone who has to run 6 PQs to get all the caches within 100 miles of me that I haven't found, I feel your pain. However, one of the reasons I request the PQs is to get the latest log entries. Once you include logs as triggering your 'something has changed' query, the savings aren't nearly as dramatic, and given that the query would be quite a bit more complex, who knows what the actual savings would be.

 

One interesting alternative would be to just run a 'new hide' query, whose start date you would keep sliding up to keep the size below 500, and then subscribe to the 'insta-notify' for disable/enable and archive/unarchive. I would guess that the number of disable/enable and archive/unarchive events would be low enough for you to go in and manually update GSAK with the current status. If not, I bet it's possible for some smart person to get an e-mail bot to do it auto-magically. It's certainly worth a try. Given that you can put in an alternate email address, it should be easy enough to route these to an email bot.
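
A very rough sketch of what such a bot's polling loop might look like; the mailbox, the credentials, and the subject filter are entirely hypothetical:

```python
# Sketch of the suggested e-mail bot: poll a mailbox for notification
# messages and collect the status changes for GSAK to apply. All the
# mailbox details and the subject filter here are hypothetical.
import email
import imaplib

def poll_notifications(host: str, user: str, password: str) -> list:
    changes = []
    with imaplib.IMAP4_SSL(host) as box:
        box.login(user, password)
        box.select("INBOX")
        _, data = box.search(None, '(UNSEEN SUBJECT "Notify")')
        for num in data[0].split():
            _, msg_data = box.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            changes.append(msg["Subject"])  # parse archive/disable info here
    return changes
```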

 

--Marky

Link to comment

On second thought, the insta-notify feature isn't quite flexible enough to do this for a whole state, so this wouldn't work for someone who is trying to keep an offline DB for the whole state. It might work for someone only interested in a 50-mile radius (which I think is the current limitation of the insta-notify feature).

 

--Marky

Link to comment
What day has the lowest load currently? I only need my PQs once a week but would like them three times. I am willing to schedule them for just once until the issue is resolved.

Tuesday is by far the lowest day. The order from lowest to highest looks like this:

 

1. Tuesday

2. Sunday

3. Saturday

4. Wednesday

5. Monday

6. Thursday

7. Friday

 

Tuesday, Saturday, and Sunday are all pretty close. Wednesday is only slightly higher. After that, the numbers get really big in a hurry.

 

:lol: Elias

Now that you've posted these best days of the week in order... and users have altered their days of searching... are these rankings still accurate?

Link to comment

I just received a PQ today. It had the following information in the email:

 

Please note: Due to the recent issues with the Pocket Query Generator, you will need to click on the link at the end of this email to re-run this query a week from today. This is a temporary solution until we can speed up the Generator. Sorry for the inconvenience.

 

I clicked the link, but nothing appeared to happen, other than I was taken to my PQ page.

 

Is this what I should have seen? I would have thought I would have gotten a message that my PQ was scheduled for next week, or something.

 

Thanks for trying!

Link to comment
I just received a PQ today. It had the following information in the email:

 

Please note: Due to the recent issues with the Pocket Query Generator, you will need to click on the link at the end of this email to re-run this query a week from today. This is a temporary solution until we can speed up the Generator. Sorry for the inconvenience.

 

I clicked the link, but nothing appeared to happen, other than I was taken to my PQ page.

 

Is this what I should have seen? I would have thought I would have gotten a message that my PQ was scheduled for next week, or something.

 

Thanks for trying!

Is anyone else getting this?

 

My PQ-to-GSAK process is automated, and I never see the email the PQ is wrapped in. I'm sure there are plenty of other folks in the same boat. Few will ever get the message that they are about to have their PQs turned off automatically.

 

Where's the popcorn? This is going to get ugly real quick.

Link to comment

I went back and checked, and my PQ from yesterday (Tuesday) did NOT have this message in the email.

 

I cannot get GSAK to automatically fetch my email from behind the "Big Brother" firewall here at work, so I drag-and-drop while at work.

 

At home, I don't have that problem, and would have never seen the notice in the email.

 

An announcement would probably be in order....

Link to comment
I just received a PQ today.  It had the following information in the email:

 

Please note: Due to the recent issues with the Pocket Query Generator, you will need to click on the link at the end of this email to re-run this query a week from today. This is a temporary solution until we can speed up the Generator. Sorry for the inconvenience.

 

I clicked the link, but nothing appeared to happen, other than I was taken to my PQ page.

 

Is this what I should have seen? I would have thought I would have gotten a message that my PQ was scheduled for next week, or something.

 

Thanks for trying!

I see that too. Since they say it's temporary, I imagine it's to clear out the deadwood - PQs created by people who are no longer caching but that keep running week after week.

Edited by Prime Suspect
Link to comment
My PQ-to-GSAK process is automated, and I never see the email the PQ is wrapped in. I'm sure there are plenty of other folks in the same boat. Few will ever get the message that they are about to have their PQs turned off automatically.

 

Where's the popcorn as this is going to get ugly real quick.

If you've automated the handling of your PQ e-mails, it's simply a matter of going to your Pocket Query page at least once a week to re-check the boxes manually for the following week's desired queries.

 

I know that every time I go to my summary page, I spot a query or two that I really don't need to be generating anymore -- that one for the event from last month, or the one for my parents' home area that doesn't really need to run again until my next trip up to see them. So reviewing the list is a useful exercise for me.

 

At least as a short-term fix, I would gladly trade having to re-check some boxes in exchange for an improvement in the reliability and timeliness of the PQ files that I actually need.

Link to comment