Scheduled I-Share/Voyager Downtime on Sunday, September 18, from 6-10am

On Sunday, September 18, all I-Share/Voyager-related CARLI-supported systems will go offline for maintenance between 6:00 AM and 10:00 AM.
I-Share/Voyager services include the VuFind local catalogs, the I-Share union catalog, the WebVoyage local catalogs, Z39.50, the Voyager staff clients, and Voyager MS Access Reporting. (Voyager Offline Backup Circ may be used during the downtime, if needed.)

VuFind 3 Available for Testing

CARLI staff are pleased to announce that we now have VuFind 3 available for I-Share library staff testing and review. Please see the information below on how to access VuFind 3 and how to send us feedback about it. We look forward to hearing from you!
VuFind 3 is significantly different from VuFind 0.6, the version currently in use for I-Share libraries, and should be considered a new system rather than an upgrade.

New I-Share Libraries: Searching & Requesting via VuFind & UB Training

Place: CARLI Office
100 Trade Centre Drive, Suite 303
Champaign, IL 61820
Directions and Parking

Trainer: Debbie Campbell, Library Services Coordinator, CARLI Office

Lunch: Provided

Intended audience: Staff at new I-Share libraries. Note: this is basic training covering the topics noted below.

Emergency Patching at 11PM Tonight (Feb 17, 2016)

There is a new glibc vulnerability (CVE-2015-7547) that affects most Linux systems. It may allow remote code execution on the servers, so I will patch and reboot all CARLI servers starting at 11PM tonight (Wednesday, Feb 17).

For most servers this will be a quick reboot. Voyager will probably be down for about 20 minutes while the Oracle database server is rebooted.
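After the reboots, staff can confirm that the patched C library is actually in place. A minimal sketch of that check (the rpm package name is an RHEL-style assumption; ldd is the fallback on other distributions):

```shell
#!/bin/sh
# Print the installed C library version so the patched glibc build
# can be confirmed after the reboot.
glibc_version() {
    if command -v rpm >/dev/null 2>&1 && rpm -q glibc >/dev/null 2>&1; then
        # RPM-based systems report the exact patched package build.
        rpm -q glibc
    else
        # Otherwise, ask the C library itself.
        ldd --version 2>/dev/null | head -n 1
    fi
}

glibc_version
```

Comparing the reported version against the distribution's security advisory for CVE-2015-7547 confirms the patch took effect.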

Brandon Gant

Voyager Outage (Dec 4th, 2015)

We have a batch job that has been failing this week and we have a ticket open with Ex Libris. We did not realize that the failed jobs were generating 4GB core dumps. At 2:10AM this morning the core dumps pushed the disk volume to 100% capacity. At approximately 8:50AM we identified and removed the core dump files to restore service.

This failure did not generate any automated alerts, so we will need to revise our monitoring.
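The gap here is the classic one: nothing watched the disk, so nothing fired. A minimal cron-able sketch of the kind of capacity alert that would have caught the core dumps (the mount point and threshold are illustrative assumptions):

```shell
#!/bin/sh
# Warn when a filesystem's usage meets or exceeds a threshold percent.
# Usage: check_disk <mountpoint> <threshold-percent>
check_disk() {
    # df -P guarantees stable POSIX columns; field 5 is Use%.
    usage=$(df -P "$1" | awk 'NR==2 { gsub("%",""); print $5 }')
    if [ "$usage" -ge "$2" ]; then
        echo "WARNING: $1 is at ${usage}% capacity"
    fi
}

# Example: warn when / climbs past 90% full (threshold is an assumption).
check_disk / 90
```

Run from cron and piped to mail, a check like this would have flagged the volume hours before it hit 100%.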

Brandon Gant

VuFind Outage this Morning (July 31)

VuFind stopped responding this morning at around 10:30AM. The Apache web server was restarted and the service was back online at 12:10PM. I apologize for the delay in getting the system back online, but a few staff are on vacation today, so troubleshooting took longer than normal.

Voyager Maintenance this Sunday (July 19th)

Starting at 12:01AM this Sunday, July 19th, the Voyager and Oracle servers will be brought down so that their data can be transferred to a different storage array. The data transfer should take about 10 hours. I will send an update before 10AM Sunday if it looks like the transfer is taking longer than anticipated.

To avoid confusion, we will also take VuFind offline while Voyager is offline.

Brandon Gant

Sunday Server Patching (July 12th)

This Sunday morning starting at 12:01AM, Production servers will be patched and rebooted. All patching should be completed before 10AM Sunday, July 12th.

Voyager will be down from approximately 12:01AM to 12:30AM since the Oracle server needs to be patched and rebooted while Voyager is offline.

Brandon Gant

VuFind web server outages

There have been two VuFind outages that appear to be caused by the Apache web server going into an odd state. The first outage was Friday, June 26th at 9PM and the second was at around 5AM yesterday (Monday, July 6th). In both cases, no obvious cause was found and the issue was corrected by restarting Apache.
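Until the root cause is found, a simple watchdog can shorten outages like these by restarting Apache automatically when the site stops answering. A sketch, assuming curl is available (the URL and recovery command are illustrative, not the actual CARLI configuration):

```shell
#!/bin/sh
# If the given URL does not answer within 10 seconds, run the supplied
# recovery command (e.g. an Apache restart).
# Usage: watchdog <url> <recovery-command...>
watchdog() {
    url=$1
    shift
    # -s: quiet, -f: fail on HTTP errors, so a hung or erroring
    # server both count as "not responding".
    if ! curl -sf --max-time 10 "$url" >/dev/null 2>&1; then
        echo "Service at $url not responding; running: $*"
        "$@"
    fi
}

# Example (hypothetical hostname and service name):
# watchdog "http://vufind.example.org/" service httpd restart
```

Run every few minutes from cron, this turns a multi-hour outage into a few minutes of downtime, at the cost of masking the underlying bug.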

Voyager Slow this Afternoon (Wednesday, June 17)

As many of you probably noticed, the system was extremely slow this afternoon (Wednesday, June 17) starting at around 12:30PM. We have narrowed things down to a set of Course Reserve database queries that originated through VuFind from one member library. We killed those queries at 3PM and the system recovered immediately.

We do not know exactly what is causing this yet, but we now know what to look for and which queries to kill. We will need to do more analysis on the queries before we find a permanent solution.
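For staff curious about the mechanics: killing a runaway Oracle query means identifying the session's SID and serial# (from v$session) and issuing a KILL SESSION statement as a DBA. A small helper that just formats that statement (the SID and serial# values below are made up for illustration):

```shell
#!/bin/sh
# Format the Oracle statement a DBA would run to kill a runaway session.
# The SID and serial# pair comes from querying v$session; this helper
# only builds the command text.
# Usage: kill_session_sql <sid> <serial#>
kill_session_sql() {
    printf "ALTER SYSTEM KILL SESSION '%s,%s' IMMEDIATE;\n" "$1" "$2"
}

# Example with hypothetical identifiers:
kill_session_sql 123 45678
```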

Brandon Gant