You didn't specify how long ago this was.
1998-1999
I faced a similar problem when I created and maintained the corporate side databases to mirror the EMS (AGC/SCADA) real time data. One month of data was roughly 300 meg and it was critical that this be available 24x7. Sometimes databases break and it is important to recover them as quickly as possible. Knowing that around 90% of the queries were on the current month, 5-10% on the previous month, and only rarely on older months, I created a new set of tables for each month, and views that combined the previous, current, and next months. Inserts were only ever done on the current month. Recovering any particular month that got corrupted was a matter of minutes rather than hours. Inserts were done on the base tables, and selects (almost always) on the views.
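If it helps, here is a rough sketch of the shape of it in Python/SQLite (the table names, columns, and months are made up; the real system was a server database with far more columns):

import sqlite3

con = sqlite3.connect("ems_mirror.db")

# One base table per month; inserts always go to the current month's table.
for month in ("202401", "202402", "202403"):
    con.execute(
        f"CREATE TABLE IF NOT EXISTS readings_{month} "
        "(point_id INTEGER, ts TEXT, value REAL)"
    )

# A view spanning the previous, current, and next months; selects go here.
con.execute(
    "CREATE VIEW IF NOT EXISTS readings AS "
    "SELECT * FROM readings_202401 "
    "UNION ALL SELECT * FROM readings_202402 "
    "UNION ALL SELECT * FROM readings_202403"
)
con.commit()

Rolling over to a new month is then just a matter of creating the next table and redefining the view.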
I wrote scripts to automatically create the new databases and the views. When corporate IT decided to create their own copy I heard that their database people were trash talking my setup behind my back. I sat down with one of them (who had been a part of the Process Control Group which I joined when I was hired) and explained my reasoning. They ended up doing it my way.
Is that list incomplete or straight-up wrong?
I have no way of knowing. If I add a bunch of numbers and get an answer, is it right or wrong? Again, as a guess, if you have enabled the correct flags and let the system run through typical processing, the returned list should be, according to the docs, a list of unused indexes. Worst case scenario, if you delete an index and your processes slow down then you probably deleted an active index. In that case just recreate it. Nothing lost.
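If you want to see what MySQL itself thinks, and assuming MySQL 5.7+ with the sys schema available (this is only a sketch; the credentials are placeholders):

import mysql.connector

# Assumes MySQL 5.7+ with the sys schema and performance_schema enabled.
con = mysql.connector.connect(host="localhost", user="me", password="secret")
cur = con.cursor()
cur.execute("SELECT object_schema, object_name, index_name "
            "FROM sys.schema_unused_indexes")
for schema, table, index in cur:
    print(schema, table, index)
con.close()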
Certainly the technology and tools have changed but principles of design and good programming styles have not. Python has, for the most part, replaced (effectively) dead languages like COBOL, and now-niche languages like FORTRAN, but structured programming and modular design are still essential techniques. I have seen too much code written by self-taught programmers who, through ignorance, eschewed good design. People still need to be told what practices may cause grief down the road. There is an old saying about there being two kinds of fools. One says, "because it's old, it's bad". The other says, "because it's new, it's better."
I thought it would be as simple as
DROP INDEX index_name ON table_name;
As for removing only unused indexes, I don't know how MySQL would be able to determine if an index is used, or not. You'd have to decide. Since indexes are used to find things quickly I would imagine that if you delete an index and then take a performance hit it's likely not an unused one.
Agreed. But programming basics are still something people need help with, and the basics have not changed appreciably over my career. Dinosaurs have more free time than those many decades younger. I, for one, have a lot and I am willing to spend part of it here.
(where people abandoned us for sites like Stack Overflow)
But there are still a few dinosaurs around who can answer some programming questions.
Where is everyone?
Most of the posts here seem to be the same words about SEO/digital marketing over and over. That doesn't attract too many eyeballs. There is very little programming content any more.
No, Javascript cannot run/start executables on the client machine.
Technically correct but there are ways around it. For example, save a file in a special folder on the target computer, which has a folder watch on that folder. The watching task could then trigger a local task.
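The watcher side might look like this sketch using the third-party watchdog package (the drop folder and the launched program are made up):

import subprocess
import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class TriggerHandler(FileSystemEventHandler):
    def on_created(self, event):
        # A new file appeared in the drop folder; launch the local task.
        if not event.is_directory:
            subprocess.run(["notepad.exe", event.src_path])

observer = Observer()
observer.schedule(TriggerHandler(), r"C:\dropbox", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()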
In that case I can't suggest anything. I've never had to update multiple databases for any of the systems I set up.
As far as I know you can connect to more than one database at a time but you require a separate connection object for each one. Since queries go through the connection object you can't run a query on more than one db at a time. It seems to me that transactions are also connection based so you would have to manually roll back a transaction on A if the commit on B fails.
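Here is a rough sketch of that manual bookkeeping using sqlite3 (the file and table names are made up):

import sqlite3

con_a = sqlite3.connect("a.db")   # one connection object per database
con_b = sqlite3.connect("b.db")
for con in (con_a, con_b):
    con.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")

con_a.execute("INSERT INTO log VALUES ('hello')")
con_b.execute("INSERT INTO log VALUES ('hello')")

con_a.commit()
try:
    con_b.commit()
except sqlite3.Error:
    # B failed after A was already committed, so A must be undone by hand.
    con_a.execute("DELETE FROM log WHERE msg = 'hello'")
    con_a.commit()
    raise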
How about posting some code?
<edit> Sorry. I didn't scroll back far enough to see the link.
Still waiting for a question.
Generally it is hard to offer SQL suggestions without seeing the code. Are you closing the connection manually, or are you using a context manager?
I suppose that will partly depend on whether or not the inserted records are an "all or nothing" transaction. When I had to bulk insert 8000+ records at one time it was important that either all the records go in, or none. Recovering from a partial insert would have taken hours and would have interfered with further (every five minutes) inserts as well as the downstream processes.
I can't speak on optimization in general without seeing the code. In my previous life I was a Windows SysAdmin/dbadmin as well as a digital plumber. I wrote many apps that had to move large quantities of data from place to place, for example, importing 16000+ records into a SQL database every five minutes. I did all this with vbScript (today I would choose Python). The trick to processing that many records quickly was using vbScript to format the records, then using BULK INSERT
to insert all of the records in one transaction. This drastically reduced the processing time by not having to submit each insert separately. The import load later grew to 16,000 records on the hour and 16,000 at five past the hour, plus the regular 16,000 every five minutes. Scripting easily handled the load. You could easily write the massaging code in c/c++ and compare it to the equivalent in Python. Considering the overhead for file I/O would be the same in both, I'd be surprised if the difference was significant.
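For reference, the bulk insert step in Python might look like this sketch using pyodbc against SQL Server (the DSN, table, and staging file path are made up):

import pyodbc

# The records have already been massaged into a plain text staging file
# that the SQL Server service account can read.
con = pyodbc.connect("DSN=scada;Trusted_Connection=yes", autocommit=False)
cur = con.cursor()
cur.execute(
    "BULK INSERT dbo.Readings FROM 'C:\\staging\\readings.csv' "
    "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')"
)
con.commit()   # all 16,000 rows land as a single transaction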
Why is it your job to determine it?
It isn't my job to determine it but I think it is a fair question.
The only thing that is against our community rules is to not specifically ask for help to do something illegal
I did not flag the post as violating any rules. I also did not down-vote the post. I wasn't trying to suggest that the intentions were illegal, and there is no way to determine the OP's actual intentions, but if someone were to ask me "where can I learn how to pick locks", I would ask the same question. Hacking skills are a dangerous tool set, likely used to bad ends far more often than good.
How are we to determine whether you want to learn hacking for ethical, or for non-ethical use?
I have a friend who spent the better part of a career doing SQL. I wrote up your question and sent it off to him. Just for sh!ts and giggles, he decided to feed it to ChatGPT first. He said that what he got back was what he would have written if he spent a lot of time researching. Here is what ChatGPT said...
Yes, you're correct that efficient indexing plays a crucial role in optimizing the SELECT/WHERE part of the query. However, when it comes to improving the efficiency of the HAVING part, there are several strategies you can employ:
Indexing: Just like with the WHERE clause, appropriate indexing can improve the efficiency of the HAVING clause. If the columns used in the HAVING clause are frequently filtered on, consider creating indexes on those columns. However, be cautious with indexing as it comes with overhead and can affect write performance.
Optimize the Query: Ensure that your query is optimized and written in a way that allows the database engine to execute it efficiently. Avoid unnecessary joins, subqueries, or complex expressions in the HAVING clause that can slow down the query processing.
Aggregate Functions: If possible, try to use more efficient aggregate functions in your HAVING clause. Some aggregate functions might be more computationally expensive than others. For example, SUM() might be more efficient than COUNT() in certain scenarios.
Limit the Result Set: Reduce the number of rows processed by the HAVING clause by applying more selective conditions in …
I'm not very familiar with HAVING but my understanding is that it is used to filter results after a GROUP operation so I can't imagine that indexes would improve performance other than on the original SELECT. Using WHERE would return rows based on one or more criteria, and would benefit from indexing, but HAVING, as I understand, is performed after the selection and grouping.
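A quick sketch with sqlite3 (made-up table) shows the difference:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (dept TEXT, salary REAL)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("IT", 50000), ("IT", 60000), ("HR", 40000)])

# WHERE filters rows before grouping and can use an index on dept.
print(con.execute("SELECT dept, SUM(salary) FROM emp "
                  "WHERE dept = 'IT' GROUP BY dept").fetchall())

# HAVING filters the groups after aggregation, so an index doesn't help here.
print(con.execute("SELECT dept, SUM(salary) AS total FROM emp "
                  "GROUP BY dept HAVING total > 90000").fetchall())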
Try
import re
pat = '<td>(.+?)</td>'
for line in open('yourfile.html'):
    if line.startswith('<tr align="right"><td>'):
        print(re.findall(pat,line))
I realized that findall is cleaner than split. You might want to have a look at this regex online tool
You can either read the entire file into a list, then filter that list, or you could process it line by line and process each matching line. For example (using my file)
for line in open('usblog.txt'):
    if '2024-01-24' in line:
        print(line)
or
text = open('usblog.txt').readlines()
for line in [x for x in text if '2024-01-24' in x]:
    print(line)
If <tr align="right"> only appears in the lines you want then filter on that.
Just process the file line by line and apply the regular expression to particular lines. I can't give you an expression that matches only the lines you showed me, with a guarantee that it matches nothing else, without seeing the entire file.
The trick is to use lazy matching which matches the shortest possible string.
html = '<tr align="right"><td>236</td><td>Roy</td><td>Allyson</td>'
pat = '<td>(.+?)</td>'
then
re.split(pat,html)
returns
['<tr align="right">', '236', '', 'Roy', '', 'Allyson', '']
and
re.split(pat,html)[1::2]
returns
['236', 'Roy', 'Allyson']
The expression .+? does a lazy match (it returns the shortest possible string that matches the pattern).
I loved that one when it first came out. And, yes, that is indeed FORTRAN. The system also came with a preprocessor called SFORX which added structured statements ($IF-$ELSE, $WHILE, etc.) but for some reason it was not used in this case. Just for fun I eventually rewrote the above with no GOTOs to prove to a co-worker that it could be done.
As an example, here is some code we got from a vendor. It is rock-solid. It does what it is supposed to but it is virtually unmaintainable. I can't begin to imagine how it was ever debugged.
SUBROUTINE READALL(UNIT,BUFF,*,*,IFIRST)
C*TTL READALL READ FROM ANYTHING
C THIS SUBROUTINE IS USED FOR READING COMPRESSED OR UNCOMPRESSED
C DATA INPUT FILES FROM LFC 'UNIT' INTO 80 BYTE ARRAY BUFF.
C END RETURNS TO FIRST RETURN, ERROR RETURNS TO SECOND
C IFIRST IS LOGICAL WHICH CALLER PROVIDES TRUE ON
C FIRST READ OFF OF THIS ALLOCATION OF THIS LFC, ELSE FALSE
C*PGMXX READALL SAVD TOOLS FS$READA ON 07/23/80 17:43:51 01819B00R01
LOGICAL IFIRST
INTEGER*1 BUFF(80)
INTEGER*4 UNIT
INTEGER*1 BYTE(120,2)
INTEGER*1 IN(2)/2*0/,ISP(2)/2*0/,ITX(2)/2*0/,NB(2)/2*0/
INTEGER*1 NBT(2)/2*1/
INTEGER*1 KCR/ZBF/,KLR/Z9F/
INTEGER*1 EOR/ZFF/
INTEGER*2 NSEQ(2)/2*0/,ITY(6),ISEQ(60,2),ICKS(60,2)
INTEGER*4 LFC(2)/2*0/
INTEGER*4 K,N,IERR,IN0,ISP0,ITX0,NB0,IS,I,ISUM,J
EQUIVALENCE (BYTE(3,1),ICKS(1,1)),(BYTE(5,1),ISEQ(1,1))
IF(.NOT.IFIRST) GO TO 21
DO 19 K=1,2
IN(K) = 0
ISP(K) = 0
ITX(K) = 0
NB(K) = 0
NBT(K) = 1
NSEQ(K) = 0
LFC(K) = 0
19 CONTINUE
21 CONTINUE
DO 101 N=1,2
IF (UNIT.EQ.LFC(N)) GO TO 103
IF (LFC(N).EQ.0) GO TO 102
101 CONTINUE
GO TO 94
102 LFC(N) = UNIT
CALL W:PDEV(UNIT,ITY)
NBT(N) = 1
IF (ITY(3).GE.4.AND.ITY(3).LE.6) NBT(N)=2
103 IERR = 0
IN0 = IN(N)
ISP0 = ISP(N)
ITX0 = ITX(N)
NB0 = NB(N)
1 IF (IN0.NE.0) GO TO 8
2 CALL BUFFERIN(UNIT,0,BYTE(1,N),30)
CALL M:WAIT(UNIT)
CALL STATUS(UNIT,IS,NB0)
IF (IS-3) 3,80,90
3 IF (BYTE(1,N).EQ.KCR.OR.BYTE(1,N).EQ.KLR) GO TO 6
NB0 = NB0*NBT(N)
DO 4 I=1,NB0
IF (BYTE(I,N).EQ.10.OR.BYTE(I,N).EQ.13) BYTE(I,N) = 1R
4 BUFF(I) = BYTE(I,N)
IF (NB0.GE.80) GO TO …
Are you differentiating between design and programming? Some professionals get stuck in their heads. For example, too many surgeons feel that surgery is the first option. My job as a professional programmer was not primarily to write code. My job was to provide solutions to problems. Ideally that meant writing as little code as possible. If coding is required then it should be as clean and as clear as possible. Above all it must be maintainable, and not just by you.
Knowing how obtuse some error messages are, could it be referring to one of the URLs in either or both of the xml files?
Typing your title into google gives...
An interface is the connection between systems or applications, while a protocol defines the rules for data exchange between these systems or applications.
Still working on it, but to match one or more occurrences of string/
starting at the beginning of the string you would specify
^(string/){1,}
then if you replaced the matched expression with "" you would end up with what you want.
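A quick test in Python (the sample string is made up):

import re

s = "string/string/string/rest/of/path"
print(re.sub(r"^(string/){1,}", "", s))   # prints: rest/of/path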
You don't need a regular expression for that. Just replace all occurrences of "string/" with "".
Can you explain one more time in more detail? I couldn't make any sense out of your explanation.
Another possible test if you are so inclined is to boot off a Linux live USB and check the Bluetooth with that.
You could uninstall the latest update and see if that fixes it. It could also be a coincidence where your Bluetooth hardware has failed. The problem with uninstalling is that Windows Update will just download and reinstall the update, so in order to prevent that you could do one of two things
Windows Home users do not typically have access to gpedit.msc, however, it can be enabled. Copy the following lines into the file enable-gpedit.cmd
then run it in an admin shell window:
dir /b %SystemRoot%\servicing\Packages\Microsoft-Windows-GroupPolicy-ClientExtensions-Package~3*.mum >List.txt
dir /b %SystemRoot%\servicing\Packages\Microsoft-Windows-GroupPolicy-ClientTools-Package~3*.mum >>List.txt
for /f %%i in ('findstr /i . List.txt 2^>nul') do dism /online /norestart /add-package:"%SystemRoot%\servicing\Packages\%%i"
You can then run gpedit by typing gpedit.msc. Expand
Local Computer Policy
Computer Configuration
Administrative Templates
Windows Components
Windows Update
Manage end user experience
Double click in the right panel on Configure Automatic Updates, then select the Disabled radio button. Note that this will disable all updates, so remember how you did this so you can re-enable it later.
GPedit lets you tweak a pile of things that would otherwise require registry hacking.
A comment with an up vote appears to be worth more "points."
Except in the Community Forum where (like Whose Line Is It Anyway) the points don't matter.
Exactly. The comment tool is for brief comments that do not significantly add to the conversation. There is no character limit when actually replying.
The first thing to do with any infrastructure you set up is to change the default passwords. You'd be appalled to find out how many breaches occurred because a sysadmin left admin/password as the defaults.
I haven't gotten to that chapter yet ;-)
My fault on that. For some reason I was thinking 75 thousand, not million. The concurrency aspect intrigued me.
"""
Name:
Thread.py
Description:
Simple (I hope) example of how to run a piece of code in a separate thread.
This code is the result of a question from a user who needed to generate
many random numbers, but found it was slowing his application down.
Random_Pool subclasses threading.Thread. To create a self-filling pool of
random numbers you create a Random_Pool object, giving it the maximum pool
size, and the minimum and maximum integer values you want to generate. You
can also give it a name.
To start auto-generating random numbers you use the start method. You do
not call run directly.
To get a random number call get_num()
The test for whether the pool is less than full runs on a 0.01 second
sleep loop.
When you are done with the pool you have to tell the run method to complete.
You do this by calling the set_done() method. Following that you must join
the pool thread to the main thread by calling join().
Comment out the print statements for production code.
Audit:
2022-07-08 rj original code
"""
import threading
import random
import time
class Random_Pool(threading.Thread):

    def __init__(self, name='pool', max_size=100, min_num=0, max_num=1000):
        threading.Thread.__init__(self)
        self.name = name
        self.pool = []        # a list containing the generated random numbers
        self.size = max_size  # maximum number of random numbers at any given time
        self.min = min_num    # minimum random integer to generate
        self.max = max_num    # maximum random integer to generate
        self.done = False     # when True the run method will complete …
I'm just coding it up. I'll post it with comments shortly.
That sounds like the perfect scenario to create a thread that generates the random numbers. That would allow the generation to be done concurrently with your game. The thread could maintain a queue of n random numbers, automatically generating new numbers to maintain the pool as you pull numbers from it. If you don't know how to do this I could help you out. Coincidentally I am currently making my way through "Quan Nguyen - Mastering Concurrency in Python".
In English, what you want is
Just turn that into code.
Plug in a few test numbers. What happens if only 35 hours are worked? In that case the overtime is -5 hours and you get a negative OT pay. Write your program in pseudo-code. Write out the steps you would do if you were doing the accounting by hand. Try out a few numbers like 35, 40, and 45 to see if you get sensible results.
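For example, a rough sketch in Python, assuming the usual time-and-a-half rule (the rate and hours are made-up test values):

def weekly_pay(hours, rate):
    # Regular time is capped at 40 hours; anything over is time and a half.
    regular = min(hours, 40)
    overtime = max(hours - 40, 0)
    return regular * rate + overtime * rate * 1.5

for hours in (35, 40, 45):
    print(hours, weekly_pay(hours, 10.0))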
I never look at anything on the front page.
What have you tried so far?
My suggestion is to install the free version of Macrium and take a complete image of your C partition. Also create bootable recovery media (a small USB memory stick) using the tools from within Macrium. Then you can reformat and re-install Windows. If you find that you are missing any files you can mount the Macrium image as a virtual drive and browse/recover any missing files. As a bonus, if the install goes horribly wrong you can always restore the old image after booting off the previously created recovery USB.