Ryujin 27 Newbie Poster

Okay, threw out the CSS and started from scratch with the stock CSS of the Strongly Typed template. Guess what? The carousel works at all resolutions under that. So now I am selectively restoring chunks of the old custom CSS.
In retrospect, I wish I'd done that from the start, instead of trying to fix what was broken--this is turning out to be much easier & faster.

rproffitt commented: Similar to trying to touch up a bad paint job. It never looks right. +14
Ryujin 27 Newbie Poster

I inherited this page, a mashup of an HTML5Up template called Strongly Typed trying to work inside our content management system, which runs on Bootstrap. Though we cannot edit the stylesheets bundled with our CMS, we can override them via inline declarations & custom page-level CSS; the page in question has the template's CSS & JS transplanted in.
At a viewport width of 736px the two-column layout slims to one column, but at the same time the photo carousel "disappears." Or it seems to: in the browser's developer console I can see what's really happening: the carousel's box balloons to a ridiculous width & height, thousands upon thousands of pixels!
Goal is simply to get it functioning ASAP--that is, the carousel properly scaling down (as it does on this page).
I'm wondering if you know of any tools in the Chrome or Firefox developer consoles that are ideal for tracking down the culprit conflicts/redundancies in a hodgepodge like this?
Thanks for your help or advice!

Ryujin 27 Newbie Poster

Suddenly (after no known changes to the page) I see it failing to load in a couple of browsers, both of which previously loaded it without problems!
http://www.kutztown.edu/library/indexoc.asp

**Problematic browsers:**   ||  **No problems:**
Chrome Canary 41 on a PC    || IE 10 on the same PC
Stock browser on an Android || Maxthon browser on the same Android
                            || Chrome 39 on a different PC
                            || Safari on an iPhone
                            || Safari on an iPad
                            || Chrome [v.??] on an iPad

Thank you kindly in advance for any thoughts, ideas, hints~~

Ryujin 27 Newbie Poster

It turns out that the necessary port is closed. Response from Bluehost:

You cannot retrieve information from http://pilot.passhe.edu:8042/ Port 8042 is closed on your server. Running curl http://pilot.passhe.edu:8042/ on a server with it opened retrieves information. You have to log into cpanel and purchase a dedicated IP and contact us back to enable port 8042.

They have a knowledgebase article explaining why they block ports on shared IPs, and how to get a dedicated one--basically a surcharge of about $3/month.

Thank you again folks!

Ryujin 27 Newbie Poster

@diafol, that's a thought, but no: the few calls I've tried have all failed from the beginning.

I have opened a ticket with Bluehost. @cereal, it's interesting because calls to other .edu sites do work; I don't know if the odd results here explain anything (the page takes a long time to render as it waits for the last two calls to time out).

I'll leave this discussion open till I can report what Bluehost says. But cereal, I'm in awe of the work you did to sort this out. When I hear from Bluehost (and hopefully what I'll hear is that they fixed it!) I will mark the question 'solved.'

cereal commented: thanks! +13
Ryujin 27 Newbie Poster

Ah, so as diafol suspected, then, there's something that prevents my server from accessing the Pilot one. (Right?)

Nothing relevant in the PHP error log. Here's the output of the test you suggested, cereal:

Warning: get_headers(http://pilot.passhe.edu:8042/cgi-bin/Pwebrecon.cgi) [function.get-headers]: failed to open stream: Connection timed out in /kutzutil/DDCweed/testHeaders.php on line 9

Also, when I turn on PHP error reporting and try to run the scripts against Pilot, I get only "couldn't connect to host" errors.

Thank you guys for patiently & rationally diagnosing this. What I still don't understand is why there'd be such an 'incompatibility' between those servers but not others.

Ryujin 27 Newbie Poster

Brilliant of you to put it on Runnable, thank you so much.
When it still failed on my Bluehost server I ran phpinfo() on both.
Runnable: PHP Version - 5.4.9-4ubuntu2.3; cURL Information - 7.29.0
Bluehost: PHP Version - 5.2.17; cURL Information - libcurl/7.24.0

So I'm guessing this explains the difference?

Ryujin 27 Newbie Poster

That is absolutely awesome, cereal. Beautifully done. Thank you!
I'm eager to get home & start working with it.

Ryujin 27 Newbie Poster

Thanks, @cereal -- Can you link to a tiny working example, that pulls from //pilot...? Because, for the life of me, I'm just not seeing it...

Ryujin 27 Newbie Poster

Hi, no it hasn't.
In briefest of terms: what I'm asking is, what is there about http://pilot.passhe.edu:8042/ ... that makes it different from other sites that cURL readily scrapes? (Or are you guys saying that you were actually able to return something from http://pilot.passhe.edu:8042/ ?)

I'm so sorry for confusing the issue by starting from search results: anything there, even the bare initial search page itself, returns zero, as far as I can see! (Nonetheless, I have learned from your explicative code examples.)
Thanks ~

Ryujin 27 Newbie Poster

Thanks again folks. Diafol, now I'm feeling extra clueless: the script you posted and said seemed to work for you doesn't work for me unless I switch out the pilot.passhe.edu URL for something else. (Then, it works.) Is there a test page where you can show me that in action with Pilot?

Ryujin 27 Newbie Poster

Thank you for that, but there is something else going on. Note that the same script works with one catalog, but not with another. Here is a demo; it makes use of the less accurate catalog.

Boiled down to the bones, with no search strings: the scripts below are identical, but the first brings back nothing while the second retrieves the target page.

The code below is at this page:

<?php

    function curl($url) {

        $options = array(
            CURLOPT_RETURNTRANSFER => TRUE,
            CURLOPT_FOLLOWLOCATION => TRUE,
            CURLOPT_AUTOREFERER    => TRUE,
            CURLOPT_CONNECTTIMEOUT => 120,
            CURLOPT_TIMEOUT        => 120,
            CURLOPT_MAXREDIRS      => 10,
            CURLOPT_USERAGENT      => "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1a2pre) Gecko/2008073000 Shredder/3.0a2pre ThunderBrowse/3.2.1.8",  // sets the user agent
            CURLOPT_URL            => $url,
        );

        $ch = curl_init();
        curl_setopt_array($ch, $options);
        $data = curl_exec($ch);
        curl_close($ch);
        return $data;
    }

    $url = "http://pilot.passhe.edu:8042/cgi-bin/Pwebrecon.cgi?DB=local&PAGE=First";
    $results_page = curl($url);

    echo $results_page;

?>

The code below is at this page:

<?php

    function curl($url) {

        $options = array(
            CURLOPT_RETURNTRANSFER => TRUE,
            CURLOPT_FOLLOWLOCATION => TRUE,
            CURLOPT_AUTOREFERER    => TRUE,
            CURLOPT_CONNECTTIMEOUT => 120,
            CURLOPT_TIMEOUT        => 120,
            CURLOPT_MAXREDIRS      => 10,
            CURLOPT_USERAGENT      => "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1a2pre) Gecko/2008073000 Shredder/3.0a2pre ThunderBrowse/3.2.1.8",
            CURLOPT_URL            => $url,
        );

        $ch = curl_init();
        curl_setopt_array($ch, $options);
        $data = curl_exec($ch);
        curl_close($ch);
        return $data;
    }

    $url = "https://vf-kutz.klnpa.org/vufind/Search/Advanced";
    $results_page = curl($url);

    echo $results_page;

?>

cereal, I do appreciate your pointing out my error with $curl_data, which was very instructive for me. I implemented the fix you showed me but to no avail. The questions remain: What is …

Ryujin 27 Newbie Poster

Greetings. I'm trying to scrape data from search results in a library catalog, but cannot return anything at all. The script below works fine pulling from another catalog, but not from this one. (It's a Voyager catalog by ExLibris, in case that helps.)

Below for simplicity is a boiled-down version of the script, with all scraping functions removed. The script runs on this page.

As you might already know, lots of library catalogs generate session URLs. But that is not the issue in this case. The script won't even scrape the URL of the catalog's 'home page,' the first link above.

Is there a way to diagnose what the catalog server is sending that prevents returning its HTML? And then to properly set a CURLOPT to overcome that?

Thank you for your thoughts!

<?php    
    function curl($url) {
         $options = Array(
            CURLOPT_RETURNTRANSFER  => TRUE,   
            CURLOPT_FOLLOWLOCATION  => TRUE,   
            CURLOPT_AUTOREFERER     => TRUE,  
            CURLOPT_CONNECTTIMEOUT  => 90,    
            CURLOPT_TIMEOUT         => 90,   
            CURLOPT_MAXREDIRS       => 10,  
            CURLOPT_URL             => $url,  
            CURLOPT_HEADER         => false,         
            CURLOPT_ENCODING       => "",            
            CURLOPT_USERAGENT      => "'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13')",    
            CURLOPT_POST           => 1,           
            CURLOPT_POSTFIELDS     => $curl_data,     
            CURLOPT_SSL_VERIFYHOST => 0,             
            CURLOPT_SSL_VERIFYPEER => false,     
            CURLOPT_VERBOSE        => 1             
        );

        $ch = curl_init();   
        curl_setopt_array($ch, $options);    
        $data = curl_exec($ch);  
        curl_close($ch);    
        return $data;    
    }
        //SETS UP A (STABLE) URL OF A SEARCH RESULTS PAGE:
        $DDCnumber = 873;
        $url = "http://pilot.passhe.edu:8042/cgi-bin/Pwebrecon.cgi?DB=local&CNT=90&Search_Arg=" . $DDCnumber . "&Search_Code=CALL%2B&submit.x=23&submit.y=23"; 
          echo "The URL we'd like to scrape is " . $url . "<br />";      
        $results_page = curl($url); …
Ryujin 27 Newbie Poster

We are pulling the events. What we don't understand is how to fix the 5-hour offset.

If a user sees the event we pull right now (Sunday at half-past six in the morning), it's fine: it shows our hours for Sunday. Same deal at noon. And at 4pm, and so on--up until 7pm, when it will instead show the hours for Monday.

Thanks.

Ryujin 27 Newbie Poster

We list our daily opening/closing hours on a Google Calendar as events. After some struggles with the JavaScript API, we are almost able to extract the hours (events) for today and for tomorrow to embed elsewhere on the site -- but at 7pm local time (EST in the US) it kicks over to the next day's listings.

That seems to be 0:00 GMT when it switches, yet the calendar itself displays as it should for our time zone. For some reason I'm too dumb to find, the script is pulling the events in a way that almost corresponds with the calendar, but doesn't quite.

Any ideas on how to solve this? (I'm open to even inelegant workarounds.)

Not sure, but suspect the relevant script chunk is below:

/* loop through each event in the feed */
var len = entries.length;
for (var i = 0; i < len; i++) {
  var entry = entries[i];
  var title = entry.getTitle().getText();
  var startDateTime = null;
  var startJSDate = null;
  var times = entry.getTimes();
  if (times.length > 0) {
    startDateTime = times[0].getStartTime();
    startJSDate = startDateTime.getDate();
  }
  var dateString = (startJSDate.getMonth() + 1) + "/" + startJSDate.getDate();
  if (!startDateTime.isDateOnly()) {
    dateString += " " + startJSDate.getHours() + ":" +
        padNumber(startJSDate.getMinutes());
  }
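
For what it's worth, the rollover at 7pm EST (i.e. 0:00 GMT) suggests the date fields are being read in UTC rather than our zone. One possible workaround, sketched below, is to shift the timestamp by a fixed zone offset before extracting the calendar date. This is an illustration, not the Google API's recommended approach, and a hard-coded offset ignores daylight saving:

```javascript
// Sketch: derive the calendar date in a fixed-offset zone (EST = UTC-5).
// Caveat (assumption): a hard-coded offset does not track daylight saving.
function localDateString(utcMillis, offsetHours) {
  // Shift the instant by the zone offset, then read date fields in UTC
  var shifted = new Date(utcMillis + offsetHours * 3600 * 1000);
  return (shifted.getUTCMonth() + 1) + "/" + shifted.getUTCDate();
}
```

For example, half past midnight GMT on Jan 5 is still the evening of Jan 4 in EST, so `localDateString(Date.UTC(2015, 0, 5, 0, 30), -5)` gives `"1/4"`.
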

Thanks very much--

Ryujin 27 Newbie Poster

Forgot to say--we're coding in C#

Ryujin 27 Newbie Poster

We're looking for a way to let users send us a file via one of our web pages. We cannot use MailAttachment because the page doesn't have rights to write files to a directory (we can't change that), so we are trying a Google Data API to upload to our Google Docs account from within an .asp or .aspx page.

We also can't put files in the actual /bin folder, so we put the three needed assemblies (Google.GData.Documents.dll, Google.GData.Client.dll, and Google.GData.Extensions.dll) in a folder we can access.

So far, so good--but now we can't figure out how to properly load these assemblies.

Following this documentation, for example, we get compilation errors like, CS1519: Invalid token '=' in class, struct, or interface member declaration:

Line 12: Assembly foo;
Line 13: foo = Assembly.LoadFrom("/ourdirectory/bin/Google.GData.Documents.dll");

Having never tried to invoke assemblies before, I suspect we're missing something really basic here. Does someone with more experience have a suggestion? Thank you very much.

Ryujin 27 Newbie Poster

Embarrassed to say how many hours I've burned on this--am hoping someone can point me down the right path. This is more a conceptual question than a straight coding one.

Building a web app to enable people to reserve stuff where I work. One of the pages uses a date-range version--the user picks an equipment category and the resulting call to the database fills the page with a list of items, each of which has twin datepickers for the first and last dates needed.

I naïvely thought it would make sense to query the reservations table of the database in order to block out the appropriate dates for items already reserved; now woe is me. I can block out dates if they are hard-coded into the array, but can't get anything dynamically generated to work. The perfectly formatted arrays show up in View Source, with a mix of dates both typed in and selected from the database, but the only ones marked on the calendars are the ones I typed in manually.

Each text field for the datepickers gets a unique name & id. After populating the arrays using SQL queries failed, I thought maybe it would make a difference to put up a couple of intermediary pages for pre-processing and pass the ready-made data directly via session variables. That didn't help. It only made me older.

These are ASP pages on an IIS server (v 1.1). After trying a bunch of good-looking …
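
One thing worth checking (a guess, since the generated source isn't shown here): dynamically built date strings often differ from the hard-coded ones only in formatting, such as zero-padding or stray whitespace, so the datepicker's exact-string comparison never matches. A sketch of normalizing both sides before the lookup; normalizeDate, isReserved, and the sample dates are hypothetical, not from the actual page:

```javascript
// Hypothetical normalizer: strip whitespace and zero-padding from m/d/yyyy
function normalizeDate(s) {
  var parts = s.trim().split("/");
  return Number(parts[0]) + "/" + Number(parts[1]) + "/" + Number(parts[2]);
}

// Reserved dates as a server might emit them (sample values), normalized once
var reserved = ["07/04/2011", " 7/5/2011"].map(normalizeDate);

// Membership test using the normalized form of both sides
function isReserved(dateStr) {
  return reserved.indexOf(normalizeDate(dateStr)) !== -1;
}
```

With both sides normalized, `"07/04/2011"` from the database and `"7/4/2011"` typed by hand compare equal.
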

Ryujin 27 Newbie Poster

Folks, I'm part of a team of advisors at a school. Students go to a central office to physically sign up to see us. The office is far from most of us, so we all need to make a once-daily trip over there to learn who's coming and when. The system is inconvenient for the students, too.

I have access to (but tiny experience with) Visual Studio 2008 and 2010 Express, and some programming in C. We have an IIS server. In my reading so far on Visual Studio, it looks as if (but of course I don't know for sure, is why I'm asking) it would be doable to build an online signup form that:

  • Displays open time slots to the student in a date/time grid
  • Enables advisor to "block out" times when unavailable
  • Requires student to sign in to make or change appointment (LDAP authentication would be ideal but I suppose too complex for a quick-&-dirty solution)
  • Instantly (via DB connection) blocks out time on the interface when an appointment is made, but does not show student who made the appointment (unless s/he did).
  • Does show the advisor, and the advisor only, the names attached to the appointments

I understand it could even be possible to tie an advisor's signup page to his/her Outlook calendar--but again, I don't have an experienced sense of how complex the various pieces mentioned would be to implement.

Can anyone give me an idea of the scale of …

Ryujin 27 Newbie Poster

Thanks very much. I've cleaned up most of the redundant 'corner' bits, but my basic question is the same as Dani's: I can't understand why both instances of that jquery.js line are needed.

Re: tinymark's question, HoverIntent is in there because users found the hair-trigger dropdowns annoying (see http://www.daniweb.com/web-development/javascript-dhtml-ajax/threads/331910 )

In case it makes a difference, Lines 1-16 above are in a header file (an include).

I appreciate your help!

Ryujin 27 Newbie Poster

This page is very heavy. I see that the jQuery library accounts for 164K, which is almost 40% of the page weight. As can be seen in the source, /jquery.js loads twice, as do several other scripts! But if I remove the /jquery.js library from either line, the page breaks (the tabbed box does not come together).

The page is here.

I am weak on understanding the ordering of dependencies. Do any ideas jump out at you about how the script and CSS calls could be reordered so that the big library would need to be called only once? Thank you!
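
If the header include and the page body each pull their own copy, one stopgap (a sketch, assuming you can edit at least one of the two call sites) is to guard the second load so jQuery is only injected when it isn't already present:

```javascript
// Returns true when a window-like object has no jQuery yet, i.e. the
// script tag still needs to be emitted. Kept as a pure helper so the
// logic can be checked outside a browser.
function shouldLoadJQuery(win) {
  return typeof win.jQuery === "undefined";
}

// In the page itself (browser-only), the guard would look like:
// if (shouldLoadJQuery(window)) {
//   document.write('<script src="/jquery.js"><\/script>');
// }
```

The cleaner fix is still to load jquery.js once, before every plugin that depends on it, but the guard keeps the page working while the includes are untangled.
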

Ryujin 27 Newbie Poster

After much hammering and pounding--not all of it by me--here is what looks to be a good working version. Thank you Airshow for your help in getting it there.

Ryujin 27 Newbie Poster

Ha! Unfortunate name aside, happily it turns out to be just what the doctor ordered, this hoverIntent.

It's not a simple delay, but something more sophisticated: it gauges mouse deceleration, so that if my cursor happens to brush up against/across the menu it does not drop down--not unless I linger there momentarily.

The sensitivity can be adjusted, too. Some have commented that it can make a menu seem "sluggish," but I don't see that...in testing our users said they were annoyed when the menu responded too readily.
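
The rough idea, as I understand it (a sketch of the concept, not the plugin's actual code): hoverIntent samples the cursor position and only opens the menu once the distance travelled between two polls drops below a sensitivity threshold:

```javascript
// Did the cursor slow down enough between two polls to count as "intent"?
// prev/curr are {x, y} position samples; sensitivity is a pixel threshold.
// (Hypothetical helper illustrating the comparison, not hoverIntent itself.)
function slowedDown(prev, curr, sensitivity) {
  var dist = Math.abs(curr.x - prev.x) + Math.abs(curr.y - prev.y);
  return dist < sensitivity;
}
```

A cursor sweeping across the menu covers many pixels per poll and fails the test; a lingering cursor passes it, and only then does the menu open.
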

Ryujin 27 Newbie Poster

Hello -- I am going to try something interesting I just stumbled across: a plugin called hoverIntent that seems to have been built with precisely this situation in mind. I'll let you know what happens--

Ryujin 27 Newbie Poster

Hi Airshow, thank you again.

Yes, that's right: one search will go to the library catalogue (which is hosted on an external server to the library's site) and the other to a 'federated search' tool (ditto).

As it happens, when we try to offer both of those searches even from separate forms, users are endlessly confused about what they see, so that mediation screen asking for a choice is key to helping them make sense of their results.

The doubled search from a single form is similar to what sites like Twingine do, with the important difference that I don't need to process or display search results: instead, I must gather the user's search term, show the prompt screen, dispatch the term to the proper search tool(s) based on her choice, and be done with it.

So the puzzle of how to kick off dual searches at one go is the main concern. The possibility of having the prompt screen show the search term would be brilliant, but not essential--yet I haven't succeeded at even that. (As it now stands, browsers that show a term--i.e. not Chrome--are one screen behind. When first called the prompt screen shows an empty value, then subsequently shows the previous search term.)

Ryujin 27 Newbie Poster

Well. I'm truly taking a pounding here.

Below is what's behind the Both button that follows a search at this page. Airshow, is it anywhere near what you mean by an Ajax call to submit one of the forms?

The code worked in isolation for sending and retrieving data from a PHP page. When I put it in this context, though, nothing--all the code below does is call the other search ('catsearch,' on the hidden form).

$( this).dialog( "close" );  
  
function getXmlHttpRequestObject() {
	if (window.XMLHttpRequest) {
		return new XMLHttpRequest(); //Not IE
	} else if(window.ActiveXObject) {
		return new ActiveXObject("Microsoft.XMLHTTP"); //IE
	} else {
		alert("Your browser doesn't support the XmlHttpRequest object.");
	}
 }

var DBendSearch = getXmlHttpRequestObject(); 

   function dbaseSearch() {                              
		   if ((DBendSearch.readyState == 0) || (DBendSearch.readyState == 4))
//I confirmed that the variable exists; its readyState turns out to be 0 at this point
	   {
	 	DBendSearch.open("GET",'http://scholar.google.com/scholar',true);
	 { 			 
	 	DBendSearch.send ();    }     }    }
 
	onclick="dbaseSearch()";   
	document.catsearch.Search_Arg.value = document.forms.searchForm.q.value; 				
		 	document.catsearch.submit(); 
							 			        
 },
// The other two buttons follow...
Ryujin 27 Newbie Poster

Thanks much Airshow for the cogent and instructive explanation. I did try your suggestion, Sunwebsite, but the most recent post explains why it doesn't work. This is a case where what seems to make sense doesn't, and my little knowledge thus becomes a dangerous thing.

You've inspired me to drink more deeply from that Pierian Ajax spring. In trying to wrap my feeble skull around the XMLHttpRequest object I got hold of Peachpit's Javascript & Ajax book plus a lot of web tutorials. To pathetically little avail thus far.

If I'm understanding this properly, it seems I need a backstop page to process the value entered in the text box. I've done so with this version of the page.

What tangles me up is what happens next. The searchform page must receive the value from the PHP backstop page, and somehow plug this value into the hidden form as well as into the alert screen ("Your search for [VALUE] will...").

Correct me if I'm wrong: The Both button on the alert screen then initiates an Ajax request, instead of trying (impossibly) to trigger submission of both forms through JavaScript? Airshow, your list of three possible scenarios is crystal clear; I don't know what is holding me up from making it happen...

Ryujin 27 Newbie Poster

Thank you Airshow -- I will fuss with this over the holiday and let you know of any progress (the new script fails to open the dropdowns at all, but I think I'm seeing the direction you're pointing me toward). Enjoy the holiday--

Ryujin 27 Newbie Poster

I wrote this script to enable a user to send a search to a library catalog and/or a group of article databases. It works roughly as hoped in IE and in Firefox, but in Safari and in Chrome it refuses to send two searches simultaneously (the third button on the prompt screen).

In Safari & Chrome it will always send whichever search is on the lowest line, but never both at once as it's supposed to.

Here is a chunk of the script that I've been fooling with; do you see any way I could get it to trigger both the document.searchForm.submit() and the document.catsearch.submit() ?

"Both": function() {
    $( this ).dialog( "close" );
    document.catsearch.Search_Arg.value = searchterm;
    document.catsearch.submit();
    document.searchForm.submit();

    if ((is_chrome == true) || (is_safari == true)) {
        document.searchForm.submit();
        document.catsearch.submit();
    }
}
} }    });
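
For anyone landing here later: each submit() starts a page navigation, and the second navigation cancels the first, which is presumably why WebKit browsers only honour the lowest call. One workaround (a sketch of an alternative, not the approach this thread settled on) is to send the catalogue search to a new window so both navigations can proceed, or to fire it as a constructed GET URL. The pure URL-builder below is a hypothetical helper; the field names in the comments are assumptions:

```javascript
// Build a GET URL from a base and a map of query parameters.
function buildSearchUrl(base, params) {
  var pairs = [];
  for (var key in params) {
    pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
  }
  return base + "?" + pairs.join("&");
}

// Browser-only usage (hypothetical form/field names):
// document.catsearch.target = "_blank";   // catalogue search opens a new window
// document.catsearch.submit();
// document.searchForm.submit();           // second submit keeps the current window
//
// or, building the catalogue GET by hand:
// var url = buildSearchUrl("http://pilot.passhe.edu:8042/cgi-bin/Pwebrecon.cgi",
//                          { DB: "local", Search_Arg: searchterm });
```

Giving one form a `target` of `_blank` is the smallest change: the two submits then target different windows instead of racing for the same one.
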
Ryujin 27 Newbie Poster

I think it's working.

To find out, try a longer timeout. At 200 milliseconds it will be barely noticeable.

Airshow

Hi Airshow--That line controls how long the dropdown remains visible after mouseout. What I'm wondering is whether there might be a way to delay its appearance in the first place--folks are complaining that the dropdowns get in the way when they move the mouse up just a bit too high.

Ryujin 27 Newbie Poster

Hi all.

I'm trying to add a slight delay to this page's drop-down menu. Though I assume that entails a call to setTimeout() somehow, I've tried all I can think of (which isn't much...) and am clueless. Thank you for any tips!

<script type="text/javascript">
var timeout    = 200;
var closetimer = 0;
var ddmenuitem = 0;

function jsddm_open()
{  jsddm_canceltimer();
   jsddm_close();
   ddmenuitem = $(this).find('ul').css('visibility', 'visible');}

function jsddm_close()
{  if(ddmenuitem) ddmenuitem.css('visibility', 'hidden');}

function jsddm_timer()
{  closetimer = window.setTimeout(jsddm_close, timeout);}

function jsddm_canceltimer()
{  if(closetimer)
   {  window.clearTimeout(closetimer);
      closetimer = null;}}

$(document).ready(function()
{  $('#jsddm > li').bind('mouseover', jsddm_open);
   $('#jsddm > li').bind('mouseout',  jsddm_timer);});

document.onclick = jsddm_close;
</script>
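
An open delay can mirror the existing close timer: start a timer on mouseover and cancel it on mouseout, so the menu only opens if the cursor stays put. Below is a sketch with injectable timer functions so the logic can be checked outside a browser; the wiring to jsddm_open and the 150ms value are assumptions, not part of the original script:

```javascript
// Factory: returns mouseover/mouseout handlers that call openFn only after
// delayMs with no intervening mouseout. setTimer/clearTimer are injected
// (window.setTimeout / window.clearTimeout in a real page).
function makeDelayedOpener(openFn, delayMs, setTimer, clearTimer) {
  var pending = null;
  return {
    mouseover: function () {
      pending = setTimer(function () { pending = null; openFn(); }, delayMs);
    },
    mouseout: function () {
      if (pending !== null) { clearTimer(pending); pending = null; }
    }
  };
}
```

In the page this would replace the direct `bind('mouseover', jsddm_open)` call, with the handlers bound to the menu items instead.
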
Ryujin 27 Newbie Poster

We have an ultra-simple tool for library database users. It adds a prefix to a given URL to create a link that works with our library's proxy server.

Our fear is that some users will inadvertently give it URLs that already have the prefix, as a few of our research databases do supply those. In such a case we'd like the script to detect the presence of the prefix and return the given URL without adding anything.

Sounds simple but I don't have a clue! I've tried to get started using the C# substring method, with something like

(permalink.Substring(0, 46))

to create a variable that would be compared, using the String.Equals method, to the standard prefix; if they match, then the script would just write back the entered URL without prepending the prefix string.

My clumsy efforts at extracting that substring, though, have gotten me nothing more than a bunch of Object Required: "whatever's entered" errors.

The script is below. Thanks in advance for any pointers...

<%
permalink=Request.QueryString("permalink")
If permalink<>"" Then
     Response.Write("http://navigator-kutztown.passhe.edu/login?url=" & permalink)     
End If
%>
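
The logic itself is just a prefix test. Sketched here in JavaScript for brevity (the production page is classic ASP, so this shows the shape of the check rather than drop-in code):

```javascript
// The proxy prefix, taken from the script above.
var PREFIX = "http://navigator-kutztown.passhe.edu/login?url=";

// Return the proxied link, leaving URLs that already carry the prefix alone.
function proxify(permalink) {
  return permalink.indexOf(PREFIX) === 0 ? permalink : PREFIX + permalink;
}
```

A URL that starts with the prefix has it at index 0, so the comparison against 0 is the whole test; no separate Substring/Equals pair is needed.
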
Ryujin 27 Newbie Poster

Sadly and mysteriously (to me, anyway) that script somehow disables the counting entirely.

Essential's solution works in Firefox, Chrome, and Safari; the issues it has in Internet Exploder are a puzzlement but I've placed an onload alert on that test page to warn IE users.

Even though it doesn't work perfectly in IE, this tool is going to be a real aid for our service. I'm marking this "Solved" and after I add proper credit to Simon and Essential I will publish and share the tool with the other participating librarians across the US and Canada. Thanks to you both.

Ryujin 27 Newbie Poster

Essential, when I got home this evening I could see your form in other browsers: even though it doesn't count the "/" double in IE, it does do so perfectly in Firefox and Safari.

Ryujin 27 Newbie Poster

Thank you both for your efforts. Aaargh!

I can't understand why not, but neither solution works to double-count the slash, I'm afraid.

Essential, yours has an additional quirk in that it doesn't begin counting till the 2nd character--not a big deal, by itself. Anyway I've posted your work here. I will study it some more.

Simon, you are absolutely right of course about the redundant lines. So it all boils down to the below--which at least does the basic count properly, though for some reason it doesn't count the slashes twice. :-/

function CheckFieldLength(fn,wn,rn,mc) {
    var str = fn.value;  // This needs to be whatever's in the text box
    var pos = str.IndexOf("/");
    var len = fn.length;

    if (pos >= 0) {  
        len = len + str.match(/\//g).length;
    }

    document.getElementById(wn).innerHTML = len;
    document.getElementById(rn).innerHTML = mc - len;
    }
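
For the record, a corrected sketch of the counter. Three things trip up the version above (flagged in comments as my reading, not quotes from the thread): JavaScript's method is indexOf with a lowercase i, the length must come from fn.value rather than fn, and guarding the match() result with || [] handles strings containing no slash at all, so no if/else branch is needed:

```javascript
// Count characters as the SMS client does: each "/" costs two.
function smsLength(str) {
  var slashes = (str.match(/\//g) || []).length;  // null-safe when no "/"
  return str.length + slashes;
}

// fn is assumed to be the text box element, as in the original handler;
// wn/rn are the ids of the "used" and "remaining" display elements, mc the max.
function CheckFieldLength(fn, wn, rn, mc) {
  var len = smsLength(fn.value);
  document.getElementById(wn).innerHTML = len;
  document.getElementById(rn).innerHTML = mc - len;
}
```

Because the guard returns an empty array when there is no slash, the counter tallies from the first keystroke instead of waiting for a "/" to appear.
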
Ryujin 27 Newbie Poster

You're quite right, Simon: your single line

var len = fn.value.length + fn.value.match(/\//g).length;

does precisely what my awkward two lines did.

While this counts perfectly once a '/' enters the textbox, it unfortunately does no counting till it sees one.

Since the original line

var len = fn.value.length;

by itself counts perfectly when the box has no '/' (i.e. it doesn't know to double for the forward slash), it makes me wonder if an if - else statement might work? Something (that I am clueless how to properly write) sort of like this:

var str= value  // This needs to be whatever's in the text box
var pos=str.IndexOf("\/")
if (pos>=0)
{
var len = fn.value.length + fn.value.match(/\//g).length;
} 
else 
{
var len = fn.value.length
}

Could something like this possibly work? If so, my problem is to get it to read whatever is entered in the box into str. Users will often enter text that has no URL or, most often, a URL at the very end, so they'll want the counter to be keeping a tally from the start even if it doesn't see a slash.

Ryujin 27 Newbie Poster

Now I see more clearly, Simon: your line of code does work, even with the escaped slash. The catch, however, is that the script counts nothing until it sees the character that's in your code. In other words, if the string in the textbox lacks a forward slash, the counter does nothing.

Ryujin 27 Newbie Poster

...there is something in it that breaks the script, instead of escaping the '/'

When I put your line in with a letter instead of the forward slash and the escape backslash, it works perfectly: if z is in there, it counts double when a z is typed in the text box. Thank you for showing me that; now I'm stuck on trying to make that troublesome slash work.

Ryujin 27 Newbie Poster

Greetings!
I'm trying to make this tool for some fellow librarians who answer questions via text messaging. As you see beneath that page's text box, the catch is that the PC-to-SMS client treats the forward slash as two characters.

I'd like if possible for the JavaScript character counter to recognize slashes and automatically add 'two' to the character count, instead of just one, for each forward slash.

(BTW the reason for 155 instead of 160 is that the client adds a shortcode to our outgoing texts.)

Thanks very much for any help--