System Status

CARLI website scheduled downtime: Wednesday, October 21, 2015, 10pm-midnight

UIUC networking staff will perform maintenance this week that will affect CARLI services. Please notify any library staff who will be working Wednesday night or Friday morning.

Network Issues Friday (10/02) and Saturday (10/03)

The CARLI servers are housed in a data center on the University of Illinois at Urbana-Champaign (UIUC) campus. The UIUC campus firewalls automatically download the latest intrusion detection signatures, and one of the signatures supplied by the vendor was bad. At 9AM Friday the firewalls crashed and automatically restarted. A firewall restart drops all connections to our Voyager server (it probably has no effect on web traffic). At 3PM Friday the firewalls crashed and restarted again, but this time they came back in a bad state that also slowed down network performance.

SFX Outage (Sept 11)

The Production SFX server (sfx.carli.illinois.edu) went offline at 10:41PM tonight. CARLI staff have taken the server down to apply software updates and expect service to be restored by midnight.

Our normal outage window is midnight to 10AM on Sunday mornings, with prior notification to our customers. This outage was not pre-approved; we will work with staff to prevent this type of outage in the future, and I apologize for this service interruption.

Brandon Gant
CARLI

VuFind Outage this Morning (July 31)

The vufind.carli.illinois.edu service stopped responding this morning at around 10:30AM. The Apache web server was restarted, and the service was back online at 12:10PM. I apologize for the delay in getting the system back online, but a few staff are on vacation today, so troubleshooting took longer than normal.

Voyager Maintenance this Sunday (July 19th)

Starting at 12:01AM this Sunday, July 19th, the Voyager and Oracle servers will be brought down so that their data can be transferred to a different storage array. The data transfer should take at least 10 hours. I will send an update before 10AM Sunday if it looks like the transfer is taking longer than anticipated.

To avoid confusion, we will also take VuFind offline while Voyager is offline.

Brandon Gant
CARLI

Voyager Outage at 6:36AM (July 10th)

The Production Voyager server stopped responding at 6:36AM this morning. By 6:57AM, the VMware hypervisor had determined that the server was truly offline and initiated a restart. Services were back online by 7:05AM.

Sunday Server Patching (July 12th)

This Sunday morning starting at 12:01AM, Production servers will be patched and rebooted. All patching should be completed before 10AM Sunday, July 12th.

Voyager will be down from approximately 12:01AM to 12:30AM since the Oracle server needs to be patched and rebooted while Voyager is offline.

Brandon Gant
CARLI

VuFind Web Server Outages

There have been two VuFind outages that appear to have been caused by the Apache web server entering an odd state. The first outage was Friday, June 26th at 9PM, and the second was at around 5AM yesterday (Monday, July 6th). In both cases, no obvious cause was found, and the issue was corrected by restarting Apache.
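For illustration only, the sketch below shows one way a basic health check and Apache restart could be scripted. This is not CARLI's actual monitoring tooling; the URL, timeout, and restart command are all assumptions.

#!/usr/bin/env python3
# Hypothetical sketch, not CARLI's actual tooling: probe the VuFind front
# end and restart Apache if it stops answering requests. The URL, timeout,
# and restart command are assumptions.
import subprocess
import urllib.error
import urllib.request

VUFIND_URL = "https://vufind.carli.illinois.edu/"  # assumed health-check URL
TIMEOUT_SECONDS = 15

def vufind_is_responding() -> bool:
    """Return True if the front end answers an HTTP request in time."""
    try:
        urllib.request.urlopen(VUFIND_URL, timeout=TIMEOUT_SECONDS)
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    if not vufind_is_responding():
        # "apachectl graceful" restarts Apache worker processes; the exact
        # restart command depends on the host's OS and packaging.
        subprocess.run(["apachectl", "graceful"], check=False)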

CONTENTdm Down for Software Upgrade

CARLI's instance of CONTENTdm is down for a software upgrade from June 23–28, 2015. CARLI Digital Collections is still available for searching and browsing, though collections are static as of 5:00pm, June 23, 2015.

Note to library staff: Access to the CONTENTdm Project Clients and CONTENTdm Web Administration will not be available during the upgrade. Collections cannot be created or published, and no changes can be made to collections or collection home pages.

Voyager Slow this Afternoon (Wednesday, June 17)

As many of you probably noticed, the system was extremely slow this afternoon (Wednesday, June 17), starting at around 12:30PM. We narrowed the problem down to a set of Course Reserve database queries that originated through VuFind from one member library. We killed those queries at 3PM, and the system recovered immediately.

We do not yet know exactly what is causing this, but we now know what to look for and which queries to kill. We will need to do more analysis on the queries before we can find a permanent solution.
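To make the idea concrete, here is a rough sketch, not our documented procedure, of how long-running Oracle sessions could be listed with the cx_Oracle driver so a DBA can decide which ones to kill. The connection details, the account's access to v$session, and the 10-minute threshold are all assumptions.

#!/usr/bin/env python
# Hypothetical sketch, not CARLI's documented procedure: list Oracle sessions
# that have been active longer than a threshold so a DBA can decide which
# ones to kill. Connection details and the threshold are placeholders, and
# the account is assumed to have SELECT access to v$session.
import cx_Oracle

LONG_RUNNING_SECONDS = 600  # assumed threshold for a "stuck" query

FIND_LONG_RUNNERS = """
    SELECT sid, serial#, username, module, last_call_et
      FROM v$session
     WHERE status = 'ACTIVE'
       AND type = 'USER'
       AND last_call_et > :threshold
"""

def list_long_running_sessions(user, password, dsn):
    """Return (sid, serial#, username, module, seconds_active) rows."""
    connection = cx_Oracle.connect(user, password, dsn)
    try:
        cursor = connection.cursor()
        cursor.execute(FIND_LONG_RUNNERS, threshold=LONG_RUNNING_SECONDS)
        return cursor.fetchall()
    finally:
        connection.close()

# A DBA would then terminate a runaway session manually, for example:
#   ALTER SYSTEM KILL SESSION '<sid>,<serial#>' IMMEDIATE;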

Brandon Gant
CARLI
