What follows is a brief description of the architecture of ASE and how you can peek a bit closer into it. To get an overview of what ASE is doing and how it is set up, we'll use a combination of internal ASE commands and OS commands.
Each ASE server running on the machine has at least one OS process, the dataserver binary, and may have several of them running. A single server instance thus consists of one or more dataserver processes. In Sybase terminology, dataserver processes that cooperate and communicate with each other through shared memory are known as engines. For production use, one CPU on the server machine is often dedicated to each Sybase engine. The engine may then be configured to hog that CPU: even when there is no active work it idle-loops, polling for new incoming client connections in order to avoid context switches. This behaviour is entirely configurable, of course, and running one or more server instances on a single-CPU machine is not a problem - depending, naturally, on the load on those servers. As long as there is sufficient memory for each instance and they are started on different TCP ports, there is no problem running several instances on one machine - even of different versions.
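As a rough illustration (the server name MYSERVER and the sa login are placeholders), the number of engines and the idle-loop behaviour are driven by configuration parameters such as "max online engines" and "runnable process search count", which you can inspect with sp_configure from isql; exact parameter names can differ between ASE versions:

    isql -Usa -SMYSERVER
    1> sp_configure "max online engines"
    2> go
    1> sp_configure "runnable process search count"
    2> go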
A simple ps will show you the dataserver processes (by default you only have one); Sybase also provides a utility named showserver that lists just the Sybase-related processes that are active. The sp_sysmon stored procedure monitors ASE for a given time interval and then dumps out several pages of global performance data. Its Engine section shows how busy the server really is, regardless of the CPU usage reported at OS level.
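A quick way to get both views might look like this (server name and login are placeholders; showserver is typically found under $SYBASE/$SYBASE_ASE/install, and the sp_sysmon interval format below is the common "hh:mm:ss" form):

    ps -ef | grep dataserver                       # OS view of the ASE processes
    $SYBASE/$SYBASE_ASE/install/showserver         # only Sybase-related processes
    isql -Usa -SMYSERVER
    1> sp_sysmon "00:01:00"
    2> go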
The ASE server does its I/O against raw devices or files, which are represented internally as virtual devices. A database can reside on one virtual device or be spread out over many of them, and a virtual device can hold many databases if you want. You should locate the OS-level device files on fast disks and make sure they are not removed or messed with by other applications or by sysadmins on a cleanup crusade. The paths to the virtual devices are stored in the master..sysdevices table; you can list them with the sp_helpdevice stored procedure.
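To see the mapping between virtual devices and the underlying OS files, something like the following should do (illustrative only):

    1> sp_helpdevice
    2> go
    1> select name, phyname from master..sysdevices
    2> go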
The server listens for incoming connections on one or several TCP ports. You identify the server by its logical Sybase server name when you connect. This logical name is listed in the interfaces file, which is used both by the ASE server and by clients such as isql. When the ASE server is started, it finds its name in the RUN_SERVER file, looks it up in the interfaces file, finds the master entry and starts a listener on the IP address and port found there. When you start isql it also looks up the logical server name in the interfaces file, but reads the query line instead. Normally this is the same IP and port, but it gives you the option of starting the server on several different IPs and ports and configuring clients in different parts of the network to use different pathways to the server. JDBC does not use the interfaces file; instead the IP and port are given as part of the URL.
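A typical Unix-style interfaces entry and a matching jConnect URL might look like this (host name, port and server name are invented, and the exact interfaces format varies between platforms):

    MYSERVER
            master tcp ether dbhost1 5000
            query tcp ether dbhost1 5000

    jdbc:sybase:Tds:dbhost1:5000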
You can observe the open port and established connections with netstat or lsof -i. It is also possible to trace the communication using tcpdump or Ethereal; these utilities have support for the Tabular Data Stream (TDS) protocol used in Sybase client-server connections.
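For example, assuming the server listens on port 5000:

    netstat -an | grep 5000
    lsof -i :5000
    tcpdump -w tds.pcap port 5000    # capture traffic for later inspection in Ethereal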
Once a client has connected, it is visible inside ASE as a task, an internal process. Tasks are not separate OS processes, but they can be listed with the sp_who stored procedure.
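For instance, from an isql session:

    1> sp_who
    2> go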
You can configure how much memory ASE should use, from a certain required minimum up to whatever your OS and ASE version combination will allow. Apart from careful analysis, clever design and well-written SQL, using more of the available memory is what makes databases speed up without changing hardware. By default, most of the memory you allocate to ASE is used for caching data, to avoid disk I/O as much as possible. Another area of memory is used to cache stored procedures in compiled form, so they can be re-used without having to be read from disk as frequently. Smaller parts are reserved for various administrative memory structures the server needs to keep track of each user connection, each database and so on.
At OS level you can see this (normally contiguous) memory chunk with ipcs -m. Inside ASE you can use sp_configure to read and modify configuration parameters such as total memory. There are several ways of determining how efficiently this memory is used; that art is explained in the Performance and Tuning Guide.
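A sketch of checking both sides (the parameter name below is the classic "total memory"; newer ASE versions use somewhat different memory parameters):

    ipcs -m                          # OS view of the shared memory segment(s)
    isql -Usa -SMYSERVER
    1> sp_configure "total memory"
    2> go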
You start the server using the startserver utility, which runs the RUN_* file you specify on the command line. If you open this RUN_SERVER file in a text editor you will find that it simply calls the dataserver executable with the parameters listed in the file. These are documented in the Utility Guide.
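A RUN_SERVER file typically looks something along these lines (the paths, file names and server name here are invented for illustration; the real file is generated when the server is built):

    #!/bin/sh
    $SYBASE/$SYBASE_ASE/bin/dataserver \
        -sMYSERVER \
        -d$SYBASE/data/master.dat \
        -e$SYBASE/$SYBASE_ASE/install/MYSERVER.log \
        -c$SYBASE/$SYBASE_ASE/MYSERVER.cfg

    startserver -f $SYBASE/$SYBASE_ASE/install/RUN_MYSERVER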
The server reads its configuration file (specified by one of those parameters) and allocates the amount of shared memory stated in that file (NOTE: this is configured in 2 KB pages, not bytes), then does its own internal distribution of this memory for various purposes. Once the memory is available, the virtual devices used to store databases are initialized (you can think of it as "mounting" them). When these are verified as available and OK, the databases go through recovery. This means reading the write-ahead transaction log and comparing any changes recorded there with the actual data stored; transactions are redone and undone as needed to bring the databases to a clean and correct state. Once that process is done the databases are brought online. The system databases go through the same process, except for the scratchpad database tempdb, which is completely overwritten with the model template database and has any remaining space zeroed out. Finally the TCP port is opened and the server is ready to accept incoming client connections.
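As a hedged illustration of the 2 KB-page unit, the memory-related part of the configuration file might contain something like the excerpt below (section and parameter names vary between ASE versions; 102400 pages of 2 KB would correspond to roughly 200 MB):

    [Physical Memory]
        total memory = 102400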