LaxLoafer 71 Posting Whiz in Training

Hosting a static website with no database? You might still be able to achieve something close to what you want with server-side includes, if they've been enabled.

LaxLoafer 71 Posting Whiz in Training

Cutting and pasting 1.6 million lines of data? I've never tried that before but something tells me it's not going to work :-o

One issue lies in the way text is stored within a PDF. Strings of text are typically broken up into arbitrary chunks, and not necessarily stored in the correct reading order. Trying to determine which chunks belong to which column, paragraph or sentence can be a challenge at times, and occasionally borders on the impossible.

Unless the PDF has been previously tagged of course. Tagging the contents of a PDF can offer a way to extract data in the correct reading order.

If resorting to the original data that generated the PDF is not an option, try extracting the data with a third-party component. Search the web for "VB.NET PDF component" and you'll find there are several on the market. Different components are likely to produce different results because text extraction is not a trivial task, so it's worth trying a few out to discover which one works best for you.

LaxLoafer 71 Posting Whiz in Training

Which of their SDKs are you using? Note that SecuSearch SDK Pro for Windows features 1:N matching.

rproffitt commented: +1 matching. +8
LaxLoafer 71 Posting Whiz in Training

A way to match fingerprints is already provided with the SDK, apparently...

"SecuGen SDKs make it quick and easy to integrate SecuGen fingerprint scanning, template generation (minutiae extraction), and template matching functions (both one-to-one and one-to-many) into almost any type of sotware application.", Source: http://www.secugen.com/products/sdk.htm

Does the documentation not include some example code?

savedlema commented: the SDK sample code do not talk about 1:M identification. +2
LaxLoafer 71 Posting Whiz in Training

You won't catch me saying this often but may I suggest you Google it?

My reasoning is this: any SEO expert with a greater understanding of ranking factors than another is more likely to outrank them in search results. Q.E.D. So the best SEO tools are among those found at the top of Google's search results. Have you tried the first page?

MMHN commented: Right +0
LaxLoafer 71 Posting Whiz in Training

To find files, use the DirectoryInfo.GetFiles method. It returns an array of FileInfo objects. Use the Random class to generate a number that can be used as an index into the FileInfo array. Be careful to specify an index that is within bounds.
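
Here's a minimal console sketch in VB.NET - the folder path and search pattern are placeholders, so adjust them to suit:

    Imports System
    Imports System.IO

    Module RandomPicture
        Sub Main()
            ' Gather the candidate files.
            Dim dirInfo As New DirectoryInfo("C:\Pictures")
            Dim files As FileInfo() = dirInfo.GetFiles("*.jpg")

            If files.Length > 0 Then
                ' Random.Next(maxValue) returns 0 to maxValue - 1,
                ' so the index is always within bounds.
                Dim rnd As New Random()
                Dim index As Integer = rnd.Next(files.Length)
                Console.WriteLine(files(index).FullName)
            End If
        End Sub
    End Module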

Selecting pictures at random means you'll occasionally see the same one appear multiple times in succession, which might not be quite what you're expecting. If that's the case you'll need to think about adjusting your algorithm to prevent it - a trivial problem which I'll leave for you to figure out. Have fun.

LaxLoafer 71 Posting Whiz in Training

Omokanye,

The issue could be a faulty cable, monitor, or video adapter. See if you can deduce the cause by replacing each of these one at a time. Turn off the monitor and computer between tests to avoid damaging the equipment. Look for worn or corroded connectors and broken soldered joints.

If you need further help, start a new forum thread and you'll get a faster response. This one is very old.

LaxLoafer 71 Posting Whiz in Training

The ratio of dofollow to nofollow links is not a useful indicator of site quality, as far as I know.

More important is the rate of increase, and it seems likely to me that dofollow and nofollow links accumulate at different rates. Links acquired through social media or guest blogging are typically nofollow. They're easy to come by, carry little SEO value, and can probably be accumulated safely at a faster pace. Dofollow links, on the other hand, are generally harder to earn, require more effort, and so accumulate more slowly.

If your link profile suddenly saw an increase in dofollow links, without a corresponding increase in nofollow links, I think that would look very suspicious. To play it safe, let your naturally gained dofollow links dictate the pace at which you use social media to gain nofollow links.

LaxLoafer 71 Posting Whiz in Training

No problem. Would you like to share your opinion with us?

LaxLoafer 71 Posting Whiz in Training

Sonipat, your post appears to have come from another forum. See this one, circa 2007.

Please refrain from copying the work of others, and make yourself aware of DaniWeb's rules.

Scraping content typically violates copyright law and damages the good reputation of DaniWeb. Additionally, questions like the one you've asked waste the time of anyone willing to respond. Please stop.

LaxLoafer 71 Posting Whiz in Training

It's my understanding that buying and selling links is acceptable to Google, providing they don't pass PageRank. However if the intention is to influence search results then you'll be violating Google's guidelines. If (when) you get caught, don't be surprised to find your site missing from search results.

To prevent links from passing PageRank they need to be marked as 'nofollow', e.g. <a href="http://www.example.com" rel="nofollow">Click here</a>. Alternatively this can be done at page-level with the 'robots' meta tag.
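
For instance, to apply nofollow to every link on a page:

    <meta name="robots" content="nofollow">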

Google uses 'nofollow' to help identify paid links, as do Bing and Yahoo. There are plenty of legitimate reasons for buying and selling links, such as advertising, or in exchange for goods or services. (BTW, if Microsoft would like to gift me an Xbox I'd be delighted to blog about how wonderful it is.)

While backlinks marked with nofollow have little or no SEO value, let's not forget they still have the potential to increase traffic. And if placed on highly relevant sites, you might expect to see an improvement in bounce rate - another metric that provides a strong signal - but target the wrong audience and your bounce rate will suffer horribly. So it's important to exercise a level of discretion over where links appear.

Don't buy links in bulk. If a seller offers 1000s of 'do follow' links on high PR sites, run a mile.

LaxLoafer 71 Posting Whiz in Training

Google's Webmaster Guidelines are usually a good starting point.

LaxLoafer 71 Posting Whiz in Training

So, Google Fetch returned an HTTP 404 error?

You can rule out issues with robots.txt. The file tells bots which resources they should not request. You simply would not receive an HTTP response (or error), because a well-behaved bot won't make requests for blocked resources.
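
For reference, a robots.txt disallow rule looks like this (the path is illustrative):

    User-agent: *
    Disallow: /private/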

The robots meta tag can also be ruled out. The tag is embedded in an HTML document, and it can only be read if the page is retrieved, which would mean an HTTP 2xx response - document found - not a 404.

It's probably not a firewall issue, because other pages on the site can be accessed; nor a permissions issue, as that would result in an HTTP 403 Forbidden error.

Sitemaps tell bots where to find resources on a host. If the sitemap contains errors, such as bad URLs, it will cause the web server to return an HTTP 404 Not Found when a bot attempts to download the resource.

I'd take a closer look at the URLs in your sitemap. Watch out for any unusual characters that might cause bots to truncate URLs. For example I have known GoogleBot to read URLs like http://example.com/page[1]/ as http://example.com/page , unless the square brackets were encoded as %5B and %5D.

Another possibility is a misconfigured redirect, but then why would that affect search engines and not all visitors to your site? If you do find a redirect is responsible it's probably safer to remove it. Search engines expect to see exactly the same content as shown …

LaxLoafer 71 Posting Whiz in Training

Search the web for "SMS gateway".

You'll discover there are many services offering SMS messaging, which you can connect to through their APIs. I bet most of these will be documented for PHP.

Alternatively you can connect directly to a network with either dedicated hardware or just a mobile phone.

LaxLoafer 71 Posting Whiz in Training

Wow, angled tabs. Amazing.

Call me a Luddite, but when you have a dozen tabs open on a netbook, square tabs are the way to go! Rounded and angled tabs just waste valuable screen space.

LaxLoafer 71 Posting Whiz in Training

Smells like spam. The article appears to have been scraped from another site, with the addition of a link on the words 'brand new icons'. Please feel welcome to correct me if I'm wrong.

See: http://bgr.com/2016/01/31/google-chrome-material-design-update/
Is this the original? It seems to be copyrighted material. Did you obtain permission?

Please do give authors proper attribution if the work is not your own.

Please be aware of DaniWeb's rules, specifically the posting of editorials already published on another site.

On a positive note, thank you for bringing the article to our attention - it's mostly relevant, and welcome to the forum :-)

LaxLoafer 71 Posting Whiz in Training

It should be noted that messing around with the registry can potentially cause your system to stop working properly. Before editing the registry it's normally advisable to create a backup.

But, as I learned in school for software: "Never mess with the registry unless you are totally sure what you are doing."

Yes, such advice is appropriate if you don't want people to learn. How can you become totally sure you know what you're doing without gaining practical experience?

Why teachers don't want students experimenting freely on school computers is understandable. But we're talking about your computer here, aren't we? I think you can safely handle this one. Changing the registered owner is about as trivial as registry editing gets.

mattyd commented: Not good -2
LaxLoafer 71 Posting Whiz in Training

Unfortunately I don't yet have access to Windows 10, but here are a couple of things you might want to try that worked for earlier versions...

To change the 'registered owner', open the registry editor (regedit) and navigate to
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion
Look for the 'RegisteredOwner' value.
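
If you prefer, the same change can be applied from a .reg file - a sketch, with a placeholder owner name:

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion]
    "RegisteredOwner"="Your Name Here"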

To change an account name, instead of going through 'User Accounts' in the control panel, try opening a command prompt and typing control userpasswords2. With a bit of luck a more advanced user account dialog will appear.

LaxLoafer 71 Posting Whiz in Training

Hi squashspark, and welcome to DaniWeb.

To get at the data inside your Coverage node you could try selecting it with an XPath query, then selecting the nodes you want in the context of the current node. For example, in the code below the SelectNodes call selects the PAGE nodes, and then within the loop SelectSingleNode uses the relative XPath query "./PAGENUMBER" to select a child of the current node. Does this help?

    Imports System.Text
    Imports System.Xml

    Dim xmlDoc As New XmlDocument
    Dim nodeList As XmlNodeList
    Dim sb As New StringBuilder() ' collects the output

    xmlDoc.Load("/Jobs_01_2016.xml")
    ' Select every PAGE node under the COVERAGE node.
    nodeList = xmlDoc.SelectNodes("/JOBS01_2016/JOB_01_09_2016_20_50_13/COVERAGE/PAGE")
    For Each pageNode As XmlNode In nodeList
        sb.AppendLine(pageNode.Name)
        ' "./PAGENUMBER" is evaluated relative to the current PAGE node.
        sb.AppendLine(pageNode.SelectSingleNode("./PAGENUMBER").InnerText)
    Next

BTW, it's generally a good thing to keep code examples as short as possible. You'll find it helps to narrow down issues, and you're also more likely to get a quicker response.

LaxLoafer 71 Posting Whiz in Training

Rendering in the browser occurs after an image has been downloaded, and usually it's the downloading that takes the time. If you wish to reduce the amount of image data transferred on your site, here are a few things you can try:

  • Resample images to reduce their resolution (72 DPI is a common recommendation for screen use).
  • Resample images to reduce their dimensions.
  • Try different file formats. Generally JPEG for photos, GIF for line drawings, PNG for both.
  • Try different levels of compression.
  • Reduce color palettes.
  • Try graphic file optimizers, e.g. JPEG optimizer, TinyPNG, and others.
  • Use a content delivery network (CDN).

For general performance advice see Yahoo's "Best Practices for Speeding Up Your Web Site". I haven't checked whether the article has been updated for HTTP/2 yet, which will make some of its recommendations less relevant once it's enabled on your server, but it's still worth a read.

diafol commented: Great +15
LaxLoafer 71 Posting Whiz in Training

I bought some natural links and...

Bought links are not 'natural'. Make sure they're tagged as 'nofollow' if you don't want to risk getting penalized by search engines.

LaxLoafer 71 Posting Whiz in Training

Unsure if this'll work but I'm attempting to block the upgrade on an old netbook by restricting permissions on the hidden folder that Microsoft will attempt to create for the download, which I believe is C:\$Windows.~BT

Any thoughts on a better way to permanently block this upgrade?

rubberman commented: Install Linux. Only updates are the ones YOU want! +13
LaxLoafer 71 Posting Whiz in Training

IIS 6 appears to use Negotiate/NTLM by default. If you want to disable Negotiate then I would try setting the NTAuthenticationProviders property in the metabase to "NTLM", instead of "Negotiate,NTLM" or leaving it undefined. I couldn't see this as an option within IIS 6 Manager, so I guess you'll need to use an admin script.
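
Something along these lines with adsutil.vbs, run from the AdminScripts folder, should do it (the site identifier 1 here is an assumption - substitute your own):

    cscript adsutil.vbs set w3svc/1/root/NTAuthenticationProviders "NTLM"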

The following article details how to enable Negotiate/NTLM on IIS 6, but it should provide enough clues for disabling it too: http://support.microsoft.com/en-us/kb/215383

Further reading: Integrated Windows Authentication (IIS 6.0)

LaxLoafer 71 Posting Whiz in Training

Does your computer meet the system requirements for Windows 10?

LaxLoafer 71 Posting Whiz in Training

The quickest and easiest way to generate a PDF is to make use of an existing PDF library. Search the web for "PDF library for PHP" and you'll discover there are a number of libraries for PHP out there, both paid and free. Pick one that suits your needs.

Policies can sometimes restrict what you're allowed to install on a server. If that's the case, using an online PDF service might be an option, assuming you're not working with sensitive data. Their APIs are normally language agnostic - whether you're using PHP or some other language shouldn't be an issue.

Creating PDFs from scratch using just PHP is the hard way and generally best avoided, but if you're interested in the format the PDF reference can be found here on Adobe's site.

LaxLoafer 71 Posting Whiz in Training

Looking at the results from rproffitt's query, the tenth one down seems promising: Windows Authentication with Chrome and IIS.

In summary, check that 'NTLM' appears before 'Negotiate' in the list of Windows authentication providers for your site. Open IIS manager and navigate to your site > IIS > Authentication > Windows Authentication, and then select 'providers...' from the Action panel.

LaxLoafer 71 Posting Whiz in Training

I believe you need to add the native DLL to the project in Solution Explorer. However, selecting 'Include in Project' alone is not always enough. Depending on the file type you may need to manually set the 'Copy to Output Directory' property. I don't know why Visual Studio behaves this way - perhaps it's a bug?

Setting the property within Solution Explorer can be a laborious task. If you ever need to set the property on multiple files you'll probably find it quicker to edit the project file directly.
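
For reference, the hand-edited entry in the project file might look something like this (the file name here is hypothetical):

    <ItemGroup>
      <Content Include="native.dll">
        <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      </Content>
    </ItemGroup>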

Which version of VS are you using?

LaxLoafer 71 Posting Whiz in Training

Requests for https://malsup.github.com/jquery.form.js are being 301 redirected to http://malsup.github.io/jquery.form.js, hence the mixed content issue.

The resource is also available via HTTPS so you could possibly link directly to https://malsup.github.io/jquery.form.js, as long as you're confident it won't change.

LaxLoafer 71 Posting Whiz in Training

Try changing the event listener on line 7 to something like...

openCtrl.addEventListener('click', function(ev) {
    ev.preventDefault();
    if (classie.has(el, 'services--active')) {
        classie.remove(el, 'services--active');
    } else {
        classie.add(el, 'services--active');
    }
});

You should find your open services button will now toggle. The event listener for the close services button, lines 12-15, can be removed.

LaxLoafer 71 Posting Whiz in Training

If you're relying on the HTTP Referer header to prevent hot linking, there are a couple of issues you might need to think about. The header can be spoofed. And it's not uncommon for the referrer to be blank, such as when someone bookmarks a resource.

I haven't attempted to block hot linking myself, but what I would try doing is setting a domain cookie so that at least you know they've visited your site. Then when they request the download, their browser will include the cookie in the request header, which you can test against.
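
If you happen to be on ASP.NET, a minimal sketch of the download side might look like the handler below - the class name, cookie name and file path are all hypothetical:

    Imports System.Web

    Public Class DownloadHandler
        Implements IHttpHandler

        Public Sub ProcessRequest(context As HttpContext) _
            Implements IHttpHandler.ProcessRequest
            ' Serve the file only when our domain cookie is present.
            If context.Request.Cookies("visited") IsNot Nothing Then
                context.Response.ContentType = "application/octet-stream"
                context.Response.TransmitFile(
                    context.Server.MapPath("~/files/download.zip"))
            Else
                context.Response.Redirect("/")
            End If
        End Sub

        Public ReadOnly Property IsReusable As Boolean _
            Implements IHttpHandler.IsReusable
            Get
                Return True
            End Get
        End Property
    End Class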

If you need to protect resources more thoroughly, consider implementing a way for users to authenticate themselves, such as with a username and password, and/or restricting access by IP address.

LaxLoafer 71 Posting Whiz in Training

... not to mention unit testing.

LaxLoafer 71 Posting Whiz in Training

I'm not aware of any closed or open source software that manages to recognize handwriting with any reasonable degree of accuracy. Even clearly printed text can present a challenge for today's OCR engines, with none of them achieving 100% accuracy. Postal services have been using OCR for a number of years to recognize zip codes and automate sorting, but there we're talking about just a few characters of text in known formats. Touch screen displays can also recognize text, but often one character at a time, and it's possible they get some hints from the order in which the strokes are drawn. The way some letters become overwritten in ordinary handwriting would make it very difficult for a machine to read.

You might be able to improve on accuracy by using multiple OCR engines, as engines employ different techniques for recognizing characters, and post-processing the results with the help of dictionaries. If the subject of the text is narrow you might achieve better results with a customized dictionary.

I believe OCR is something that will get better with the use of neural networks, but it will be a long time before we see anything that can outperform a human eye and mind, which have the advantage of understanding the context in which something was written. Then again, it might never happen - I wouldn't be surprised if handwriting becomes obsolete over the next decade.

LaxLoafer 71 Posting Whiz in Training

Have tried most of Mozilla's recommendations but nothing has worked so far.

Techno, it might be worth trying a few more. Millions and millions of people use Firefox so it's quite likely that someone else has encountered the same problem or something very similar.

Have a look at this one on Mozilla's support forum: http://support.mozilla.org/en-US/questions/921787 Please check the entire thread as it contains some info that's related to how mailto links work in Firefox.

I think my earlier troubleshooting advice is a little too involved for a novice, but it's something you might want to return to later on if you do get really stuck. Once you've identified the fault I suspect the fix will actually be quite simple. Keep trying :-)

LaxLoafer 71 Posting Whiz in Training

To create a new Windows user account, open a command prompt and type 'control userpasswords2'. The 'User Accounts' dialog box should appear. Select the Users tab and click the 'Add...' button. Alternatively you can find the dialog box through the control panel, or follow Microsoft's guidance: "Creating a user account"

Once you've created the user account you should be able to switch between them by logging off and then on. If fast user switching is enabled (not available on lesser editions of Windows), you can switch between accounts without having to log off first.

Creating a new user account will give you a fresh Windows user profile, and running Firefox will generate a new Firefox profile. If Firefox works as expected that would suggest your installation of Firefox is fine, and also the profile for the Windows user account.

As a next step you could try copying the old Firefox profile across to your new account. Note there will be some file permission issues when attempting to access the other user's account, so you'll need to copy the old profile to a shared location or grant the second account permissions to access the old profile folder.

To find the location of your Firefox profile open up the browser and navigate to 'Help > Troubleshooting Information' > Profile Folder.

You'll need to edit 'profiles.ini' to tell Firefox where to find a copy of the old profile. The ini file is normally located in the user's AppData folder, …

LaxLoafer 71 Posting Whiz in Training

What happens if you try it from a newly created Windows user account?

LaxLoafer 71 Posting Whiz in Training

Have you tried setting the default email client? See: http://windows.microsoft.com/en-us/windows-vista/change-the-default-e-mail-program

LaxLoafer 71 Posting Whiz in Training

I'm unable to find any mention of SSI on your host's site. To test whether SSI is enabled you could try something like outputting the date - just insert <!--#echo var="DATE_LOCAL" --> into the body of your document.

LaxLoafer 71 Posting Whiz in Training

You might want to have a look at HTML Imports.

The directive you have used is a server-side include, which may need enabling on your web server in order to work. Note that the 'file' or 'virtual' argument of the include directive should specify a path somewhere inside the web root directory. If you're hoping to share the file between sites, I guess you might want to consider hard linking to the file or copying it instead.
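
For example, a server-side include referencing a file inside the web root (the path is hypothetical):

    <!--#include virtual="/includes/header.html" -->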

LaxLoafer 71 Posting Whiz in Training

If the file is corrupt you'll probably have a hard time trying to recover the contents. That's why it's important to keep backup copies :-)

In the unlikely event you don't have a backup, try looking for a temporary file. Sometimes applications will create such files while a document is open and delete them when finished. It's just possible a temporary file may still exist, especially if the fault that caused the corruption also terminated the application unexpectedly.

Which operating system are you using?

LaxLoafer 71 Posting Whiz in Training

You might want to check Google's help pages on Tracking across multiple domains to see if it applies to your case.

Google Analytics will allow you to monitor sites under your control.

Pages can be tracked by page title, but can you guarantee the title will be unique? What happens when different people sign up using identical names?

LaxLoafer 71 Posting Whiz in Training

Way too much violence on the screens these days. Bring back the A-team. Those guys fired thousands of rounds of ammunition and never hit anyone. Now that was entertainment!

LaxLoafer 71 Posting Whiz in Training

<misinformation>
In keeping with the current trend of using Roman numerals, Microsoft is pleased to announce the next generation of Windows will be named 'Windows X'. And this time there will be only two editions, Standard and Professional (abbreviated to S and P). The circle is now complete. There will be no more confusion. Anyone searching the web for Windows XP will find only the latest version.
</misinformation>

LaxLoafer 71 Posting Whiz in Training

Request denied.

I wish PI were an integer.

LaxLoafer 71 Posting Whiz in Training

At the point where you're calling document.getElementById('imageTwo'), the image element doesn't actually exist yet. As a result, the value of image will be null.

You need to do the lookup after the IMG tag has been created. You can do this either by placing a script block after the tag, or by assigning an onload handler to the body and doing the lookup there. For example:

<html>
<head><title></title>
<script>
var image;  // assigned once the document has loaded
var x = 0;  // current horizontal offset in pixels
function myOnload() {
    image = document.getElementById("imageTwo");
}
function moveright() {
    image.style.left = (x += 5) + "px";
}
</script>
</head>
<body onload="myOnload()">
<img src="image.jpg" id="imageTwo" style="position:relative;">
<a href="#" onclick="moveright();">Move right</a>
</body>
</html>

Note that style.left is ignored unless the element is positioned, hence the position:relative on the IMG tag - don't forget it when you move the styling into your CSS.

LaxLoafer 71 Posting Whiz in Training

HTTPS helps to prevent cookie theft by MITM attacks. However, if a site has an XSS vulnerability the cookies can still be stolen. And if that site relied solely on a session cookie for authentication, then an attacker could gain access to your account without needing to log in.
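
One related mitigation: flag session cookies as HttpOnly so scripts can't read them, and Secure so they're only sent over HTTPS. The cookie name and value below are illustrative:

    Set-Cookie: sessionid=abc123; Secure; HttpOnly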

LaxLoafer 71 Posting Whiz in Training

Keystoke, in an earlier post you stated you are a link seller. If you have fallen foul of Google's policy on link schemes, you are going to find it extraordinarily difficult to rank with them again.

LaxLoafer 71 Posting Whiz in Training

I have two Wi-Fi enabled routers connected via powerline Ethernet too, similar to rubberman, except that both routers are configured identically - same SSID, passwords, etc. This appeared to be the recommended configuration when I first looked into setting it up. It works reasonably well, although the signal strength reported by one of the routers is weaker than I'd expect. It's possible there's some interference, or perhaps the reading is that of the more distant router, but the problem hasn't warranted further investigation. As the saying goes, if it ain't broke...

LaxLoafer 71 Posting Whiz in Training

Why place the link at the bottom of the post? Surely it'll get a higher CTR if placed at the top. Forcing visitors to scroll a page to view your link means it stands less chance of getting clicked. This is something you can easily experiment with and measure. What does your analytics software tell you?

LaxLoafer 71 Posting Whiz in Training

Creating an OCR engine from scratch demands a lot of effort. Instead of reinventing the wheel, why not make use of an existing OCR component? You'll find there are several on the market for programmers, and at least one should fit your needs. It'll save you months of work, if not years!

LaxLoafer 71 Posting Whiz in Training

Thinking about it, I'm still doubtful that HTTPS is suitable for everywhere and everything, as Google would have us believe. For example, we have some fairly hefty downloadables on our site - one of our products is roughly 50 MB in size. As caching proxy servers are generally unable to handle HTTPS traffic, clients would need to request this resource directly from our servers, and I'm not sure what impact that might have on site performance. We might try working around it by splitting the downloadables off to a separate domain that we'd continue to serve over HTTP. One of Google's arguments for using HTTPS is that we should be ensuring that what our clients receive is exactly the same as what we're serving, but is that really an issue if our software is already signed?