Hi Friends,
I need some help connecting to a web server using C shell or Tcl scripting on Linux. In my office we have an intranet that is run by a Tomcat server, and I need to connect to that server and read some data that exists on it.

So how should I connect to that Tomcat web server using shell scripting in csh or Tcl? Any help would be great.

Thanks
Srikanth M.

Hello Srikanth!

This should be pretty easy in csh, depending on what utilities you have installed. Do you have 'wget', 'netcat' (nc) or 'curl' available in the shell?

Hi Gromit,
Thanks for your reply. I have all of those applications on my Linux system, but they are used to download files from the web server. I need to establish a connection with the web server and do operations like reading the contents of a specific directory, or checking whether a file or directory exists in a particular directory. Is there any application which can satisfy my needs?

Thanks in advance
Srikanth M.

Easy!

If you want to check individual files, take a look at "wget --spider" or "curl -I"

(Just in case you don't already know, you can get a list of options for most commands with the 'man' command. Example: man curl)

Here's an example using curl to see if the Google logo exists:

## Example where the file exists
# curl -I http://www.google.com/images/srpr/logo3w.png
HTTP/1.1 200 OK
Content-Type: image/png
Content-Length: 7007
Last-Modified: Fri, 05 Aug 2011 02:40:26 GMT
Date: Sat, 10 Mar 2012 17:41:50 GMT
Expires: Sat, 10 Mar 2012 17:41:50 GMT
Cache-Control: private, max-age=31536000
X-Content-Type-Options: nosniff
Server: sffe
X-XSS-Protection: 1; mode=block

## Example where the file does NOT exist
# curl -I http://www.google.com/images/srpr/logo3wx.png
HTTP/1.1 404 Not Found
Content-Type: text/html; charset=UTF-8
X-Content-Type-Options: nosniff
Date: Sat, 10 Mar 2012 17:46:56 GMT
Server: sffe
Content-Length: 954
X-XSS-Protection: 1; mode=block

It gives you the type of file, modification times, etc. without actually downloading the file.
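
If you'd rather script this check than read the headers by eye, curl can also print just the HTTP status code with its -w option. Here's a minimal sketch reusing the same logo URL:

## Print only the HTTP status code (200 = exists, 404 = missing)
# curl -s -o /dev/null -w '%{http_code}' http://www.google.com/images/srpr/logo3w.png
200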

Here's a wget example. The output is much simpler. It just tells you if it exists:

## File exists (200 OK)
# wget -nv --spider http://www.google.com/images/srpr/logo3w.png
2012-03-10 11:45:44 URL: http://www.google.com/images/srpr/logo3w.png 200 OK

## File does NOT exist
# wget -nv --spider http://www.google.com/images/srpr/logo3wx.png
http://www.google.com/images/srpr/logo3wx.png:
Remote file does not exist -- broken link!!!
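
And since you asked about csh: wget --spider exits with a nonzero status when the file is missing, so a script can branch on $status. A minimal csh sketch, using the same example URL:

#!/bin/csh
# In csh, $status holds the exit status of the last command (0 = success)
wget -q --spider http://www.google.com/images/srpr/logo3w.png
if ($status == 0) then
    echo "file exists"
else
    echo "file is missing"
endif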

If you want to read the contents of an entire directory, you can do that only if that directory allows 'directory listing'. If you can browse to that directory in a web browser and see a list of files, then it's good!

Here's an example using the directory listing of ibiblio's pub/linux archive to see if 'robots.txt' exists:

# wget -qO- http://distro.ibiblio.org/pub/linux/distributions/ | grep robots

    <tr><td class="n"><a href="robots.txt">robots.txt</a></td><td class="m">2011-May-05 19:14:55</td><td class="s">0K&nbsp;</td><td class="t">text/plain</td></tr>

You can parse that output however you want to find what you need.
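
For instance, to pull just the names out of that listing, you could extract the href values. This is a rough sketch; the exact HTML varies from server to server, so treat the pattern as an assumption about this particular listing format:

## Print one name per line (directories typically end with a trailing slash)
# wget -qO- http://distro.ibiblio.org/pub/linux/distributions/ | grep -o 'href="[^"]*"' | sed 's/href="//;s/"$//'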

If your web server does NOT allow directory listing, or there is an index.html(.php, .htm, etc.) in place, then you will have to check for each file individually, using something like the first example.

I hope this helps!

Hi Gromit,
I just want to get the list of directories present in some path, e.g. "http://intranet/books". In this path there are some directories, and I just want to get a list of the directories present under it.

Thanks
Srikanth M.

Hi!

That's not a problem, as long as your web server allows directory listing (Options Indexes in Apache), and there is no 'index' file there. Or, alternatively, if this is a web server that is under your control, you could CREATE an index page that lists the files you're concerned about.
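
Since your first post mentioned Tomcat: there, directory listings are controlled by the DefaultServlet rather than an Apache-style Options directive. A sketch of the relevant setting in Tomcat's conf/web.xml (assuming a stock install, and that whoever administers the server makes the change):

    <servlet>
        <servlet-name>default</servlet-name>
        <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
        <init-param>
            <!-- false by default; true enables directory listings -->
            <param-name>listings</param-name>
            <param-value>true</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>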

Otherwise, if there is no way to get the directory list, say, in a web browser, then you're not going to be able to get it with any of the tools that I've listed either.

I hope this helps!

Hi Gromit,
Thanks for the reply, but the web server is not in my control, and it doesn't allow any directory listing either, so I am checking for any other alternatives which might fulfill my need.

Thanks
Srikanth M.

I see. In that case, you'll probably need to check each file individually. There's no way to discover files/directories on a web server unless they're linked from an index page, or you know the full path already.

Hi Gromit,
Thanks for your reply. I know the complete path, and I just want the list of directories and files present in that directory.

Thanks
Srikanth M.

Sorry, I meant the path to each individual file. You can't get a directory listing unless the web server allows it.
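
So the best you can do is probe each name you know (or can guess) one at a time. A minimal csh sketch; the base URL and file names below are hypothetical placeholders for your intranet paths:

#!/bin/csh
# Probe a list of candidate names under a known base URL (names are made up)
set base = "http://intranet/books"
foreach name (java.pdf linux.pdf tcl.pdf)
    set code = `curl -s -o /dev/null -w '%{http_code}' $base/$name`
    if ($code == 200) then
        echo "$name exists"
    else
        echo "$name not found (HTTP $code)"
    endif
end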
