Does anyone know how I can get specific content from another website and post it on my website?

In this case I want to get the content from http://www.gametracker.com/server_info/213.239.207.85:27960/

What I want is:

Name, Game, Address, Port, Status, Clan, Server Manager, Members, Current Players, Average, and Game Server Rank.

I've read somewhere that I can use cURL, but I don't know what I'm doing. I've also tried to find some examples, but found nothing. I'm even bad at searching, I guess :)

Could a friendly soul out there please help me?

Can I use wget for this? If so, please tell me how.


@Suraj - we need to keep all answers public here on the forum, so that others may benefit from any answers given.

Also, please state your question clearly: use wget for what? Show any code that you have, or at least provide some context so that others may guide you.

The easy way.
If you need specific data, I'm not really sure of the best way to do that... I've used this for a project, though; it's not bad and quite simple to use if you need to fetch specific elements of a page.
Good luck!

You would need to have a fair idea of what the markup on the original site is like. If you do, or are willing to invest the time to find out, you can get at what you want in a two-step process.

  1. Use file_get_contents() to fetch the entire web page that has your content. Despite its name, file_get_contents() can also fetch the contents of a URL.
  2. Parse the HTML that comes back using the Simple HTML DOM parser (a sketch follows this list).
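
A minimal sketch of those two steps, assuming you've downloaded simple_html_dom.php from the Simple HTML DOM project. The `.server-name` selector is a placeholder I made up; you'd replace it after viewing the source of the actual gametracker page:

```php
<?php
// Step 1: fetch the page. file_get_contents() works on URLs
// when allow_url_fopen is enabled in php.ini.
$url  = 'http://www.gametracker.com/server_info/213.239.207.85:27960/';
$page = file_get_contents($url);
if ($page === false) {
    die('Could not fetch the page');
}

// Step 2: parse it with Simple HTML DOM.
include 'simple_html_dom.php';
$html = str_get_html($page);

// '.server-name' is a hypothetical selector -- inspect the real
// markup (view source in your browser) and adjust it.
$name = $html->find('.server-name', 0);
echo $name ? trim($name->plaintext) : 'not found';

$html->clear(); // free memory; the parser holds large structures
```

The same find() call works for each of the other fields once you know their selectors.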

You cannot afford to simply write this code once and then keep using it: changes to the markup on the external site are liable to break your code. I did something similar for a rather different requirement and ended up coding in an email alert to myself to let me know when things stopped working.


Check to see if they have an API. APIs usually return data in JSON (or other) formats. You can use file_get_contents(), or if allow_url_fopen is disabled in your host's php.ini, you can use cURL.
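
For example, a basic cURL fetch of the page from the original question might look like this (just a sketch; the user agent string is only there because some sites reject requests without one):

```php
<?php
// Fetch a remote page with cURL when file_get_contents() on URLs is blocked.
$ch = curl_init('http://www.gametracker.com/server_info/213.239.207.85:27960/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);     // return the body as a string
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);     // follow any redirects
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0'); // some sites reject blank user agents
$page = curl_exec($ch);
if ($page === false) {
    die('cURL error: ' . curl_error($ch));
}
curl_close($ch);
```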

If there isn't an API, then you'll most likely be retrieving the html, which you'll need to navigate in order to extract the data you need. That's easier said than done. As jrewing states, changes to the DOM by the site admins will probably cause your beautifully crafted extraction functions to fail.
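
For what it's worth, PHP's bundled DOM extension with XPath handles that navigation without any third-party library. The XPath query below is a placeholder to swap out once you've looked at the real page source:

```php
<?php
// Navigate fetched HTML with PHP's built-in DOM extension and XPath.
$page = file_get_contents('http://www.gametracker.com/server_info/213.239.207.85:27960/');

$doc = new DOMDocument();
libxml_use_internal_errors(true); // real-world HTML is rarely valid XML
$doc->loadHTML($page);
libxml_clear_errors();

$xpath = new DOMXPath($doc);
// Placeholder query -- replace the class name after viewing the page source.
foreach ($xpath->query("//span[@class='server-name']") as $node) {
    echo trim($node->textContent), "\n";
}
```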

Scraping sites in this way is pretty fashionable, but be aware that some sites take exception to it.
