Mercurial > illumos > illumos-gate
annotate usr/src/uts/common/rpc/svc.c @ 13988:81670e8b6dd9

3620 Corruption of the `xprt-ready' queue in svc_xprt_qdelete()
Reviewed by: Boris Protopopov <Boris.Protopopov@nexenta.com>
Reviewed by: Gordon Ross <gordon.ross@nexenta.com>
Reviewed by: Jeffry Molanus <Jeffry.Molanus@nexenta.com>
Approved by: Richard Lowe <richlowe@richlowe.net>

author:  Marcel Telka <Marcel.Telka@nexenta.com>
date:    Fri, 15 Mar 2013 16:26:41 -0400
parents: f91b268929d9
/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright 2010 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */

/*
 * Copyright 2013 Nexenta Systems, Inc. All rights reserved.
 */

/*
 * Copyright 1993 OpenVision Technologies, Inc., All Rights Reserved.
 */

/* Copyright (c) 1983, 1984, 1985, 1986, 1987, 1988, 1989 AT&T */
/* All Rights Reserved */

/*
 * Portions of this source code were derived from Berkeley 4.3 BSD
 * under license from the Regents of the University of California.
 */

/*
 * Server-side remote procedure call interface.
 *
 * Master transport handle (SVCMASTERXPRT).
 * The master transport handle structure is shared among service
 * threads processing events on the transport. Some fields in the
 * master structure are protected by locks:
 * - xp_req_lock protects the request queue:
 *   xp_req_head, xp_req_tail
 * - xp_thread_lock protects the thread (clone) counts:
 *   xp_threads, xp_detached_threads, xp_wq
 * Each master transport is registered to exactly one thread pool.
 *
 * Clone transport handle (SVCXPRT)
 * The clone transport handle structure is a per-service-thread handle
 * to the transport. The structure carries all the fields/buffers used
 * for request processing. A service thread or, in other words, a clone
 * structure, can be linked to an arbitrary master structure to process
 * requests on this transport. The master handle keeps track of reference
 * counts of threads (clones) linked to it. A service thread can switch
 * to another transport by unlinking its clone handle from the current
 * transport and linking to a new one. Switching is relatively inexpensive
 * but it involves locking (the master's xprt->xp_thread_lock).
 *
 * Pools.
 * A pool represents a kernel RPC service (NFS, Lock Manager, etc.).
 * Transports related to the service are registered with the service pool.
 * Service threads can switch between different transports in the pool.
 * Thus, each service has its own pool of service threads. The maximum
 * number of threads in a pool is pool->p_maxthreads. This limit allows
 * us to restrict resource usage by the service. Some fields are protected
 * by locks:
 * - p_req_lock protects several counts and flags:
 *   p_reqs, p_walkers, p_asleep, p_drowsy, p_req_cv
 * - p_thread_lock governs other thread counts:
 *   p_threads, p_detached_threads, p_reserved_threads, p_closing
 *
 * In addition, each pool contains a doubly-linked list of transports,
 * an `xprt-ready' queue and a creator thread (see below). Threads in
 * the pool share some other parameters such as stack size and
 * polling timeout.
 *
 * Pools are initialized through the svc_pool_create() function called from
 * the nfssys() system call. However, thread creation must be done by
 * the userland agent. This is done by using the SVCPOOL_WAIT and
 * SVCPOOL_RUN arguments to nfssys(), which call svc_wait() and
 * svc_do_run(), respectively. Once the pool has been initialized,
 * the userland process must set up a 'creator' thread. This thread
 * should park itself in the kernel by calling svc_wait(). If
 * svc_wait() returns successfully, it should fork off a new worker
 * thread, which then calls svc_do_run() in order to get work. When
 * that thread is complete, svc_do_run() will return, and the user
 * program should call thr_exit().
 *
 * When we try to register a new pool and there is an old pool with
 * the same id in the doubly linked pool list (this happens when we kill
 * and restart nfsd or lockd), then we unlink the old pool from the list
 * and mark its state as `closing'. After that the transports can still
 * process requests but new transports won't be registered. When all the
 * transports and service threads associated with the pool are gone the
 * creator thread (see below) will clean up the pool structure and exit.
 *
 * svc_queuereq() and svc_run().
 * The kernel RPC server is interrupt driven. The svc_queuereq() interrupt
 * routine is called to deliver an RPC request. The service threads
 * loop in svc_run(). The interrupt function queues a request on the
 * transport's queue and makes sure that the request is serviced.
 * It may either wake up one of the sleeping threads, or ask for a new
 * thread to be created, or, if the previous request is just being picked
 * up, do nothing. In the last case the service thread that is picking up
 * the previous request will wake up or create the next thread. After a
 * service thread processes a request and sends a reply it returns to
 * svc_run() and svc_run() calls svc_poll() to find new input.
 *
 * There is no longer an "inconsistent" but "safe" optimization in the
 * svc_queuereq() code. This "inconsistent" state was leading to
 * inconsistencies between the actual number of requests and the value
 * of p_reqs (the total number of requests). Because of this, hangs were
 * occurring in svc_poll() where p_reqs was greater than one and no
 * requests were found on the request queues.
 *
 * svc_poll().
 * In order to avoid unnecessary locking, which causes performance
 * problems, we always look for a pending request on the current transport.
 * If there is none we take a hint from the pool's `xprt-ready' queue.
 * If the queue had an overflow we switch to the `drain' mode, checking
 * each transport in the pool's transport list. Once we find a
 * master transport handle with a pending request we latch the request
 * lock on this transport and return to svc_run(). If the request
 * belongs to a transport different than the one the service thread is
 * linked to we need to unlink and link again.
 *
 * A service thread goes to sleep when there are no pending
 * requests on the transports registered with the pool.
 * All the pool's threads sleep on the same condition variable.
 * If a thread has been sleeping for too long a period of time
 * (by default 5 seconds) it wakes up and exits. Also, when a transport
 * is closing, sleeping threads wake up to unlink from this transport.
 *
 * The `xprt-ready' queue.
 * If a service thread finds no request on the transport it is currently
 * linked to, it will look for another transport with a pending request.
 * To make this search more efficient each pool has an `xprt-ready' queue.
 * The queue is a FIFO. When the interrupt routine queues a request it also
 * inserts a pointer to the transport into the `xprt-ready' queue. A
 * thread looking for a transport with a pending request can pop a
 * transport and check for a request. The request may already be gone
 * since it could have been taken by a thread linked to that transport.
 * In such a case we try the next hint. The `xprt-ready' queue has a fixed
 * size (by default 256 nodes). If it overflows svc_poll() has to switch
 * to the less efficient but safe `drain' mode and walk through the pool's
 * transport list.
 *
 * Both the svc_poll() loop and the `xprt-ready' queue are optimized
 * for the peak load case, that is, for the situation when the queue is
 * not empty, there are at all times a few pending requests, and a service
 * thread that has just processed a request does not go to sleep but
 * immediately picks up the next request.
 *
 * Thread creator.
 * Each pool has a thread creator associated with it. The creator thread
 * sleeps on a condition variable and waits for a signal to create a
 * service thread. The actual thread creation is done in userland by
 * the method described in "Pools" above.
 *
 * Signaling threads should turn on the `creator signaled' flag, and
 * can avoid sending signals when the flag is on. The flag is cleared
 * when the thread is created.
 *
 * When the pool is in the closing state (i.e. it has already been
 * unregistered from the pool list) the last thread on the last transport
 * in the pool should turn the p_creator_exit flag on. The creator thread
 * will clean up the pool structure and exit.
 *
 * Thread reservation; Detaching service threads.
 * A service thread can detach itself to block for an extended amount
 * of time. However, to keep the service active we need to guarantee
 * at least pool->p_redline non-detached threads that can process incoming
 * requests. Thus, the maximum number of detached and reserved threads is
 * p->p_maxthreads - p->p_redline. A service thread should first acquire
 * a reservation, and if the reservation was granted it can detach itself.
 * If a reservation was granted but the thread does not detach itself
 * it should cancel the reservation before it returns to svc_run().
 */
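The `xprt-ready' hint mechanism described above can be sketched in userland C as a fixed-size FIFO of transport pointers with an overflow flag. This is an illustrative model only, not the kernel's implementation: the names, the tiny queue size, and the lack of locking are all simplifications.

```c
#include <assert.h>
#include <stddef.h>

#define QSIZE 4                 /* kernel default is 256 qnodes */

/* Hypothetical stand-in for SVCMASTERXPRT. */
typedef struct xprt { int id; } xprt_t;

typedef struct {
        xprt_t  *q_body[QSIZE];
        int     q_head;         /* next slot to fill */
        int     q_tail;         /* next hint to pop */
        int     q_count;
        int     q_overflow;     /* once set, pollers must `drain' the list */
} hintq_t;

/* Interrupt side: enqueue a hint; on overflow just record the fact. */
static void
hintq_put(hintq_t *q, xprt_t *xp)
{
        if (q->q_count == QSIZE) {
                q->q_overflow = 1;      /* hints lost; walk the full list */
                return;
        }
        q->q_body[q->q_head] = xp;
        q->q_head = (q->q_head + 1) % QSIZE;
        q->q_count++;
}

/* Service side: pop a hint, or NULL if empty. The request may be gone. */
static xprt_t *
hintq_get(hintq_t *q)
{
        xprt_t *xp;

        if (q->q_count == 0)
                return (NULL);
        xp = q->q_body[q->q_tail];
        q->q_tail = (q->q_tail + 1) % QSIZE;
        q->q_count--;
        return (xp);
}
```

A hint is only a hint: popping a transport does not guarantee a pending request is still there, which is why the real svc_poll() re-checks the transport and falls back to the next hint or to drain mode.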

#include <sys/param.h>
#include <sys/types.h>
#include <rpc/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/tiuser.h>
#include <sys/t_kuser.h>
#include <netinet/in.h>
#include <rpc/xdr.h>
#include <rpc/auth.h>
#include <rpc/clnt.h>
#include <rpc/rpc_msg.h>
#include <rpc/svc.h>
#include <sys/proc.h>
#include <sys/user.h>
#include <sys/stream.h>
#include <sys/strsubr.h>
#include <sys/tihdr.h>
#include <sys/debug.h>
#include <sys/cmn_err.h>
#include <sys/file.h>
#include <sys/systm.h>
#include <sys/callb.h>
#include <sys/vtrace.h>
#include <sys/zone.h>
#include <nfs/nfs.h>
#include <sys/tsol/label_macro.h>

#define RQCRED_SIZE     400     /* this size is excessive */

/*
 * Defines for svc_poll()
 */
#define SVC_EXPRTGONE   ((SVCMASTERXPRT *)1)    /* Transport is closing */
#define SVC_ETIMEDOUT   ((SVCMASTERXPRT *)2)    /* Timeout */
#define SVC_EINTR       ((SVCMASTERXPRT *)3)    /* Interrupted by signal */

/*
 * Default stack size for service threads.
 */
#define DEFAULT_SVC_RUN_STKSIZE (0)     /* default kernel stack */

int     svc_default_stksize = DEFAULT_SVC_RUN_STKSIZE;

/*
 * Default polling timeout for service threads.
 * Multiplied by hz when used.
 */
#define DEFAULT_SVC_POLL_TIMEOUT (5)    /* seconds */

clock_t svc_default_timeout = DEFAULT_SVC_POLL_TIMEOUT;

/*
 * Size of the `xprt-ready' queue.
 */
#define DEFAULT_SVC_QSIZE       (256)   /* qnodes */

size_t  svc_default_qsize = DEFAULT_SVC_QSIZE;

/*
 * Default limit for the number of service threads.
 */
#define DEFAULT_SVC_MAXTHREADS  (INT16_MAX)

int     svc_default_maxthreads = DEFAULT_SVC_MAXTHREADS;

/*
 * Maximum number of requests from the same transport (in `drain' mode).
 */
#define DEFAULT_SVC_MAX_SAME_XPRT (8)

int     svc_default_max_same_xprt = DEFAULT_SVC_MAX_SAME_XPRT;


/*
 * Default `Redline' of non-detached threads.
 * Total number of detached and reserved threads in an RPC server
 * thread pool is limited to pool->p_maxthreads - svc_redline.
 */
#define DEFAULT_SVC_REDLINE     (1)

int     svc_default_redline = DEFAULT_SVC_REDLINE;

/*
 * A node for the `xprt-ready' queue.
 * See below.
 */
struct __svcxprt_qnode {
        __SVCXPRT_QNODE *q_next;
        SVCMASTERXPRT   *q_xprt;
};

/*
 * Global SVC variables (private).
 */
struct svc_globals {
        SVCPOOL         *svc_pools;
        kmutex_t        svc_plock;
};

/*
 * Debug variable to check for rdma based
 * transport startup and cleanup. Controlled
 * through /etc/system. Off by default.
 */
int rdma_check = 0;

/*
 * Authentication parameters list.
 */
static caddr_t rqcred_head;
static kmutex_t rqcred_lock;

/*
 * Pointers to transport specific `rele' routines in rpcmod (set from rpcmod).
 */
void    (*rpc_rele)(queue_t *, mblk_t *) = NULL;
void    (*mir_rele)(queue_t *, mblk_t *) = NULL;

/* ARGSUSED */
void
rpc_rdma_rele(queue_t *q, mblk_t *mp)
{
}
void    (*rdma_rele)(queue_t *, mblk_t *) = rpc_rdma_rele;


/*
 * This macro picks which `rele' routine to use, based on the transport type.
 */
#define RELE_PROC(xprt) \
        ((xprt)->xp_type == T_RDMA ? rdma_rele : \
        (((xprt)->xp_type == T_CLTS) ? rpc_rele : mir_rele))

/*
 * If true, then keep quiet about version mismatch.
 * This macro is for broadcast RPC only. We have no broadcast RPC in
 * the kernel now but one may define a flag in the transport structure
 * and redefine this macro.
 */
#define version_keepquiet(xprt) (FALSE)

/*
 * ZSD key used to retrieve zone-specific svc globals
 */
static zone_key_t svc_zone_key;

static void svc_callout_free(SVCMASTERXPRT *);
static void svc_xprt_qinit(SVCPOOL *, size_t);
static void svc_xprt_qdestroy(SVCPOOL *);
static void svc_thread_creator(SVCPOOL *);
static void svc_creator_signal(SVCPOOL *);
static void svc_creator_signalexit(SVCPOOL *);
static void svc_pool_unregister(struct svc_globals *, SVCPOOL *);
static int svc_run(SVCPOOL *);

/* ARGSUSED */
static void *
svc_zoneinit(zoneid_t zoneid)
{
        struct svc_globals *svc;

        svc = kmem_alloc(sizeof (*svc), KM_SLEEP);
        mutex_init(&svc->svc_plock, NULL, MUTEX_DEFAULT, NULL);
        svc->svc_pools = NULL;
        return (svc);
}

/* ARGSUSED */
static void
svc_zoneshutdown(zoneid_t zoneid, void *arg)
{
        struct svc_globals *svc = arg;
        SVCPOOL *pool;

        mutex_enter(&svc->svc_plock);
        while ((pool = svc->svc_pools) != NULL) {
                svc_pool_unregister(svc, pool);
        }
        mutex_exit(&svc->svc_plock);
}

/* ARGSUSED */
static void
svc_zonefini(zoneid_t zoneid, void *arg)
{
        struct svc_globals *svc = arg;

        ASSERT(svc->svc_pools == NULL);
        mutex_destroy(&svc->svc_plock);
        kmem_free(svc, sizeof (*svc));
}

/*
 * Global SVC init routine.
 * Initialize global generic and transport type specific structures
 * used by the kernel RPC server side. This routine is called only
 * once when the module is being loaded.
 */
void
svc_init()
{
        zone_key_create(&svc_zone_key, svc_zoneinit, svc_zoneshutdown,
            svc_zonefini);
        svc_cots_init();
        svc_clts_init();
}

/*
 * Destroy the SVCPOOL structure.
 */
static void
svc_pool_cleanup(SVCPOOL *pool)
{
        ASSERT(pool->p_threads + pool->p_detached_threads == 0);
        ASSERT(pool->p_lcount == 0);
        ASSERT(pool->p_closing);

        /*
         * Call the user supplied shutdown function. This is done
         * here so the user of the pool will be able to clean up
         * service related resources.
         */
        if (pool->p_shutdown != NULL)
                (pool->p_shutdown)();

        /* Destroy `xprt-ready' queue */
        svc_xprt_qdestroy(pool);

        /* Destroy transport list */
        rw_destroy(&pool->p_lrwlock);

        /* Destroy locks and condition variables */
        mutex_destroy(&pool->p_thread_lock);
        mutex_destroy(&pool->p_req_lock);
        cv_destroy(&pool->p_req_cv);

        /* Destroy creator's locks and condition variables */
        mutex_destroy(&pool->p_creator_lock);
        cv_destroy(&pool->p_creator_cv);
        mutex_destroy(&pool->p_user_lock);
        cv_destroy(&pool->p_user_cv);

        /* Free pool structure */
        kmem_free(pool, sizeof (SVCPOOL));
}

/*
 * If all the transports and service threads are already gone
 * signal the creator thread to clean up and exit.
 */
static bool_t
svc_pool_tryexit(SVCPOOL *pool)
{
        ASSERT(MUTEX_HELD(&pool->p_thread_lock));
        ASSERT(pool->p_closing);

        if (pool->p_threads + pool->p_detached_threads == 0) {
                rw_enter(&pool->p_lrwlock, RW_READER);
                if (pool->p_lcount == 0) {
                        /*
                         * Release the locks before sending a signal.
                         */
                        rw_exit(&pool->p_lrwlock);
                        mutex_exit(&pool->p_thread_lock);

                        /*
                         * Notify the creator thread to clean up and exit
                         *
                         * NOTICE: No references to the pool beyond this point!
                         * The pool is being destroyed.
                         */
                        ASSERT(!MUTEX_HELD(&pool->p_thread_lock));
                        svc_creator_signalexit(pool);

                        return (TRUE);
                }
                rw_exit(&pool->p_lrwlock);
        }

        ASSERT(MUTEX_HELD(&pool->p_thread_lock));
        return (FALSE);
}

/*
 * Find a pool with a given id.
 */
static SVCPOOL *
svc_pool_find(struct svc_globals *svc, int id)
{
        SVCPOOL *pool;

        ASSERT(MUTEX_HELD(&svc->svc_plock));

        /*
         * Search the list for a pool with a matching id.
         */
        for (pool = svc->svc_pools; pool; pool = pool->p_next)
                if (pool->p_id == id)
                        return (pool);

        return (NULL);
}

/*
 * PSARC 2003/523 Contract Private Interface
 * svc_do_run
 * Changes must be reviewed by Solaris File Sharing
 * Changes must be communicated to contract-2003-523@sun.com
 */
int
svc_do_run(int id)
{
        SVCPOOL *pool;
        int err = 0;
        struct svc_globals *svc;

        svc = zone_getspecific(svc_zone_key, curproc->p_zone);
        mutex_enter(&svc->svc_plock);

        pool = svc_pool_find(svc, id);

        mutex_exit(&svc->svc_plock);

        if (pool == NULL)
                return (ENOENT);

        /*
         * Increment counter of pool threads now
         * that a thread has been created.
         */
        mutex_enter(&pool->p_thread_lock);
        pool->p_threads++;
        mutex_exit(&pool->p_thread_lock);

        /* Give work to the new thread. */
        err = svc_run(pool);

        return (err);
}

/*
 * Unregister a pool from the pool list.
 * Set the closing state. If all the transports and service threads
 * are already gone signal the creator thread to clean up and exit.
 */
static void
svc_pool_unregister(struct svc_globals *svc, SVCPOOL *pool)
{
        SVCPOOL *next = pool->p_next;
        SVCPOOL *prev = pool->p_prev;

        ASSERT(MUTEX_HELD(&svc->svc_plock));

        /* Remove from the list */
        if (pool == svc->svc_pools)
                svc->svc_pools = next;
        if (next)
                next->p_prev = prev;
        if (prev)
                prev->p_next = next;
        pool->p_next = pool->p_prev = NULL;

        /*
         * Offline the pool. Mark the pool as closing.
         * If there are no transports in this pool notify
         * the creator thread to clean it up and exit.
         */
        mutex_enter(&pool->p_thread_lock);
        if (pool->p_offline != NULL)
                (pool->p_offline)();
        pool->p_closing = TRUE;
        if (svc_pool_tryexit(pool))
                return;
        mutex_exit(&pool->p_thread_lock);
}

/*
 * Register a pool with a given id in the global doubly linked pool list.
 * - if there is a pool with the same id in the list then unregister it
 * - insert the new pool into the list.
 */
static void
svc_pool_register(struct svc_globals *svc, SVCPOOL *pool, int id)
{
        SVCPOOL *old_pool;

        /*
         * If there is a pool with the same id then remove it from
         * the list and mark the pool as closing.
         */
        mutex_enter(&svc->svc_plock);

        if ((old_pool = svc_pool_find(svc, id)) != NULL)
                svc_pool_unregister(svc, old_pool);

        /* Insert into the doubly linked list */
        pool->p_id = id;
        pool->p_next = svc->svc_pools;
        pool->p_prev = NULL;
        if (svc->svc_pools)
                svc->svc_pools->p_prev = pool;
        svc->svc_pools = pool;

        mutex_exit(&svc->svc_plock);
}

/*
 * Initialize a newly created pool structure
 */
static int
svc_pool_init(SVCPOOL *pool, uint_t maxthreads, uint_t redline,
        uint_t qsize, uint_t timeout, uint_t stksize, uint_t max_same_xprt)
{
        klwp_t *lwp = ttolwp(curthread);

        ASSERT(pool);

        if (maxthreads == 0)
                maxthreads = svc_default_maxthreads;
        if (redline == 0)
                redline = svc_default_redline;
        if (qsize == 0)
                qsize = svc_default_qsize;
        if (timeout == 0)
                timeout = svc_default_timeout;
        if (stksize == 0)
                stksize = svc_default_stksize;
        if (max_same_xprt == 0)
                max_same_xprt = svc_default_max_same_xprt;

        if (maxthreads < redline)
                return (EINVAL);

        /* Allocate and initialize the `xprt-ready' queue */
        svc_xprt_qinit(pool, qsize);

        /* Initialize doubly-linked xprt list */
        rw_init(&pool->p_lrwlock, NULL, RW_DEFAULT, NULL);

        /*
         * Setting lwp_childstksz on the current lwp so that
         * descendants of this lwp get the modified stacksize, if
         * it is defined. It is important that either this lwp or
         * one of its descendants do the actual servicepool thread
         * creation to maintain the stacksize inheritance.
         */
        if (lwp != NULL)
                lwp->lwp_childstksz = stksize;

        /* Initialize thread limits, locks and condition variables */
        pool->p_maxthreads = maxthreads;
        pool->p_redline = redline;
        pool->p_timeout = timeout * hz;
        pool->p_stksize = stksize;
        pool->p_max_same_xprt = max_same_xprt;
        mutex_init(&pool->p_thread_lock, NULL, MUTEX_DEFAULT, NULL);
        mutex_init(&pool->p_req_lock, NULL, MUTEX_DEFAULT, NULL);
        cv_init(&pool->p_req_cv, NULL, CV_DEFAULT, NULL);

        /* Initialize userland creator */
        pool->p_user_exit = FALSE;
        pool->p_signal_create_thread = FALSE;
        pool->p_user_waiting = FALSE;
        mutex_init(&pool->p_user_lock, NULL, MUTEX_DEFAULT, NULL);
        cv_init(&pool->p_user_cv, NULL, CV_DEFAULT, NULL);

        /* Initialize the creator and start the creator thread */
        pool->p_creator_exit = FALSE;
        mutex_init(&pool->p_creator_lock, NULL, MUTEX_DEFAULT, NULL);
        cv_init(&pool->p_creator_cv, NULL, CV_DEFAULT, NULL);

        (void) zthread_create(NULL, pool->p_stksize, svc_thread_creator,
            pool, 0, minclsyspri);

        return (0);
}
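The zero-means-default substitution and the redline sanity check at the top of svc_pool_init() can be exercised in isolation. The following userland sketch mirrors only that validation step; the function name and constants are illustrative stand-ins for the kernel defaults above.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define DEF_MAXTHREADS  INT16_MAX       /* mirrors DEFAULT_SVC_MAXTHREADS */
#define DEF_REDLINE     1               /* mirrors DEFAULT_SVC_REDLINE */

/*
 * Substitute defaults for zero-valued limits, then reject inconsistent
 * limits. Returns 0 on success, EINVAL if maxthreads < redline.
 */
static int
pool_limits(unsigned int *maxthreads, unsigned int *redline)
{
        if (*maxthreads == 0)
                *maxthreads = DEF_MAXTHREADS;
        if (*redline == 0)
                *redline = DEF_REDLINE;

        /*
         * The redline is a floor of non-detached threads, so it can
         * never exceed the pool's total thread limit.
         */
        if (*maxthreads < *redline)
                return (EINVAL);
        return (0);
}
```

The check runs after the default substitution, so a caller passing zeros always gets a consistent pair; only an explicit, contradictory pair is rejected.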

/*
 * PSARC 2003/523 Contract Private Interface
 * svc_pool_create
 * Changes must be reviewed by Solaris File Sharing
 * Changes must be communicated to contract-2003-523@sun.com
 *
 * Create a kernel RPC server-side thread/transport pool.
 *
 * This is the public interface for creation of a server RPC thread pool
 * for a given service provider. Transports registered with the pool's id
 * will be served by the pool's threads. This function is called from the
 * nfssys() system call.
 */
int
svc_pool_create(struct svcpool_args *args)
{
        SVCPOOL *pool;
        int error;
        struct svc_globals *svc;

        /*
         * Caller should check credentials in a way appropriate
         * in the context of the call.
         */

        svc = zone_getspecific(svc_zone_key, curproc->p_zone);
        /* Allocate a new pool */
        pool = kmem_zalloc(sizeof (SVCPOOL), KM_SLEEP);

        /*
         * Initialize the pool structure and create a creator thread.
         */
        error = svc_pool_init(pool, args->maxthreads, args->redline,
            args->qsize, args->timeout, args->stksize, args->max_same_xprt);

        if (error) {
                kmem_free(pool, sizeof (SVCPOOL));
                return (error);
        }

        /* Register the pool with the global pool list */
        svc_pool_register(svc, pool, args->id);

        return (0);
}

int
svc_pool_control(int id, int cmd, void *arg)
{
        SVCPOOL *pool;
        struct svc_globals *svc;

        svc = zone_getspecific(svc_zone_key, curproc->p_zone);

        switch (cmd) {
        case SVCPSET_SHUTDOWN_PROC:
                /*
                 * Search the list for a pool with a matching id
                 * and register the shutdown handle with that pool.
                 */
                mutex_enter(&svc->svc_plock);

                if ((pool = svc_pool_find(svc, id)) == NULL) {
                        mutex_exit(&svc->svc_plock);
                        return (ENOENT);
                }
                /*
                 * Grab the transport list lock before releasing the
                 * pool list lock
                 */
                rw_enter(&pool->p_lrwlock, RW_WRITER);
                mutex_exit(&svc->svc_plock);

                pool->p_shutdown = *((void (*)())arg);

                rw_exit(&pool->p_lrwlock);

                return (0);
        case SVCPSET_UNREGISTER_PROC:
                /*
                 * Search the list for a pool with a matching id
                 * and register the unregister callback handle with that pool.
                 */
                mutex_enter(&svc->svc_plock);

                if ((pool = svc_pool_find(svc, id)) == NULL) {
                        mutex_exit(&svc->svc_plock);
                        return (ENOENT);
                }
                /*
                 * Grab the transport list lock before releasing the
                 * pool list lock
                 */
                rw_enter(&pool->p_lrwlock, RW_WRITER);
                mutex_exit(&svc->svc_plock);

                pool->p_offline = *((void (*)())arg);

                rw_exit(&pool->p_lrwlock);

                return (0);
        default:
                return (EINVAL);
        }
}

/*
 * Pool's transport list manipulation routines.
 * - svc_xprt_register()
 * - svc_xprt_unregister()
 *
 * svc_xprt_register() is called from svc_tli_kcreate() to
 * insert a new master transport handle into the doubly linked
 * list of server transport handles (one list per pool).
 *
 * The list is used by svc_poll(), when it operates in `drain'
 * mode, to search for the next transport with a pending request.
 */

int
svc_xprt_register(SVCMASTERXPRT *xprt, int id)
{
        SVCMASTERXPRT *prev, *next;
        SVCPOOL *pool;
        struct svc_globals *svc;

        svc = zone_getspecific(svc_zone_key, curproc->p_zone);
        /*
         * Search the list for a pool with a matching id
         * and register the transport handle with that pool.
         */
        mutex_enter(&svc->svc_plock);

        if ((pool = svc_pool_find(svc, id)) == NULL) {
                mutex_exit(&svc->svc_plock);
                return (ENOENT);
        }

        /* Grab the transport list lock before releasing the pool list lock */
        rw_enter(&pool->p_lrwlock, RW_WRITER);
        mutex_exit(&svc->svc_plock);

        /* Don't register new transports when the pool is in closing state */
        if (pool->p_closing) {
                rw_exit(&pool->p_lrwlock);
                return (EBUSY);
        }

        /*
         * Initialize xp_pool to point to the pool.
         * We don't want to go through the pool list every time.
         */
        xprt->xp_pool = pool;

        /*
         * Insert a transport handle into the list.
         * The list head points to the most recently inserted transport.
         */
        if (pool->p_lhead == NULL)
                pool->p_lhead = xprt->xp_prev = xprt->xp_next = xprt;
        else {
                next = pool->p_lhead;
                prev = pool->p_lhead->xp_prev;

                xprt->xp_next = next;
                xprt->xp_prev = prev;

                pool->p_lhead = prev->xp_next = next->xp_prev = xprt;
        }

        /* Increment the transports count */
        pool->p_lcount++;

        rw_exit(&pool->p_lrwlock);
        return (0);
}
842 | |
843 /* | |
844 * Called from svc_xprt_cleanup() to remove a master transport handle | |
845 * from the pool's list of server transports (when a transport is | |
846 * being destroyed). | |
847 */ | |
848 void | |
849 svc_xprt_unregister(SVCMASTERXPRT *xprt) | |
850 { | |
851 SVCPOOL *pool = xprt->xp_pool; | |
852 | |
853 /* | |
854 * Unlink xprt from the list. | |
855 * If the list head points to this xprt then move it | |
856 * to the next xprt or reset to NULL if this is the last | |
857 * xprt in the list. | |
858 */ | |
859 rw_enter(&pool->p_lrwlock, RW_WRITER); | |
860 | |
861 if (xprt == xprt->xp_next) | |
862 pool->p_lhead = NULL; | |
863 else { | |
864 SVCMASTERXPRT *next = xprt->xp_next; | |
865 SVCMASTERXPRT *prev = xprt->xp_prev; | |
866 | |
867 next->xp_prev = prev; | |
868 prev->xp_next = next; | |
869 | |
870 if (pool->p_lhead == xprt) | |
871 pool->p_lhead = next; | |
872 } | |
873 | |
874 xprt->xp_next = xprt->xp_prev = NULL; | |
875 | |
876 /* Decrement list count */ | |
877 pool->p_lcount--; | |
878 | |
879 rw_exit(&pool->p_lrwlock); | |
880 } | |
881 | |
882 static void | |
883 svc_xprt_qdestroy(SVCPOOL *pool) | |
884 { | |
885 mutex_destroy(&pool->p_qend_lock); | |
886 kmem_free(pool->p_qbody, pool->p_qsize * sizeof (__SVCXPRT_QNODE)); | |
887 } | |
888 | |
889 /* | |
890 * Initialize an `xprt-ready' queue for a given pool. | |
891 */ | |
892 static void | |
893 svc_xprt_qinit(SVCPOOL *pool, size_t qsize) | |
894 { | |
895 int i; | |
896 | |
897 pool->p_qsize = qsize; | |
898 pool->p_qbody = kmem_zalloc(pool->p_qsize * sizeof (__SVCXPRT_QNODE), | |
899 KM_SLEEP); | |
900 | |
901 for (i = 0; i < pool->p_qsize - 1; i++) | |
902 pool->p_qbody[i].q_next = &(pool->p_qbody[i+1]); | |
903 | |
904 pool->p_qbody[pool->p_qsize-1].q_next = &(pool->p_qbody[0]); | |
905 pool->p_qtop = &(pool->p_qbody[0]); | |
906 pool->p_qend = &(pool->p_qbody[0]); | |
907 | |
908 mutex_init(&pool->p_qend_lock, NULL, MUTEX_DEFAULT, NULL); | |
909 } | |
910 | |
911 /* | |
912 * Called from the svc_queuereq() interrupt routine to queue | |
913 * a hint for svc_poll() about which transport has a pending request. | |
914 * - insert a pointer to xprt into the xprt-ready queue (FIFO) | |
915 * - if the xprt-ready queue is full turn the overflow flag on. | |
916 * | |
917 * NOTICE: pool->p_qtop is protected by the pool's request lock | |
918 * and the caller (svc_queuereq()) must hold the lock. | |
919 */ | |
920 static void | |
921 svc_xprt_qput(SVCPOOL *pool, SVCMASTERXPRT *xprt) | |
922 { | |
923 ASSERT(MUTEX_HELD(&pool->p_req_lock)); | |
924 | |
925 /* If the overflow flag is set there is nothing we can do */ | |
926 if (pool->p_qoverflow) | |
927 return; | |
928 | |
929 /* If the queue is full turn the overflow flag on and exit */ | |
930 if (pool->p_qtop->q_next == pool->p_qend) { | |
931 mutex_enter(&pool->p_qend_lock); | |
932 if (pool->p_qtop->q_next == pool->p_qend) { | |
933 pool->p_qoverflow = TRUE; | |
934 mutex_exit(&pool->p_qend_lock); | |
935 return; | |
936 } | |
937 mutex_exit(&pool->p_qend_lock); | |
938 } | |
939 | |
940 /* Insert a hint and move pool->p_qtop */ | |
941 pool->p_qtop->q_xprt = xprt; | |
942 pool->p_qtop = pool->p_qtop->q_next; | |
943 } | |
944 | |
945 /* | |
946 * Called from svc_poll() to get a hint about which transport has a | |
947 * pending request. Returns a pointer to a transport or NULL if the | |
948 * `xprt-ready' queue is empty. | |
949 * | |
950 * Since we do not acquire the pool's request lock while checking if | |
951 * the queue is empty we may miss a request that is just being delivered. | |
952 * However this is ok since svc_poll() will retry again until the | |
953 * count indicates that there are pending requests for this pool. | |
954 */ | |
955 static SVCMASTERXPRT * | |
956 svc_xprt_qget(SVCPOOL *pool) | |
957 { | |
958 SVCMASTERXPRT *xprt; | |
959 | |
960 mutex_enter(&pool->p_qend_lock); | |
961 do { | |
962 /* | |
963 * If the queue is empty return NULL. | |
964 * Since we do not acquire the pool's request lock which | |
965 * protects pool->p_qtop this is not an exact check. However, | |
966 * this is safe - if we miss a request here svc_poll() | |
967 * will retry again. | |
968 */ | |
969 if (pool->p_qend == pool->p_qtop) { | |
970 mutex_exit(&pool->p_qend_lock); | |
971 return (NULL); | |
972 } | |
973 | |
974 /* Get a hint and move pool->p_qend */ | |
975 xprt = pool->p_qend->q_xprt; | |
976 pool->p_qend = pool->p_qend->q_next; | |
977 | |
978 /* Skip fields deleted by svc_xprt_qdelete() */ | |
979 } while (xprt == NULL); | |
980 mutex_exit(&pool->p_qend_lock); | |
981 | |
982 return (xprt); | |
983 } | |
984 | |
985 /* | |
986 * Delete all the references to a transport handle that | |
987 * is being destroyed from the xprt-ready queue. | |
988 * Deleted pointers are replaced with NULLs. | |
989 */ | |
990 static void | |
991 svc_xprt_qdelete(SVCPOOL *pool, SVCMASTERXPRT *xprt) | |
992 { | |
13988 | 993 __SVCXPRT_QNODE *q;
0 | 994 
13988 | 995 mutex_enter(&pool->p_req_lock);
996 for (q = pool->p_qend; q != pool->p_qtop; q = q->q_next) {
0 | 997 if (q->q_xprt == xprt)
998 q->q_xprt = NULL;
999 }
13988 | 1000 mutex_exit(&pool->p_req_lock);
0 | 1001 }
1002 | |
1003 /* | |
1004 * Destructor for a master server transport handle. | |
1005 * - if there are no more non-detached threads linked to this transport | |
1006 * then, if requested, call xp_closeproc (we don't wait for detached | |
1007 * threads linked to this transport to complete). | |
1008 * - if there are no more threads linked to this | |
1009 * transport then | |
1010 * a) remove references to this transport from the xprt-ready queue | |
1011 * b) remove a reference to this transport from the pool's transport list | |
1012 * c) call a transport specific `destroy' function | |
1013 * d) cancel remaining thread reservations. | |
1014 * | |
1015 * NOTICE: Caller must hold the transport's thread lock. | |
1016 */ | |
1017 static void | |
1018 svc_xprt_cleanup(SVCMASTERXPRT *xprt, bool_t detached) | |
1019 { | |
1020 ASSERT(MUTEX_HELD(&xprt->xp_thread_lock)); | |
1021 ASSERT(xprt->xp_wq == NULL); | |
1022 | |
1023 /* | |
1024 * If called from the last non-detached thread | |
1025 * it should call the closeproc on this transport. | |
1026 */ | |
1027 if (!detached && xprt->xp_threads == 0 && xprt->xp_closeproc) { | |
1028 (*(xprt->xp_closeproc)) (xprt); | |
1029 } | |
1030 | |
1031 if (xprt->xp_threads + xprt->xp_detached_threads > 0) | |
1032 mutex_exit(&xprt->xp_thread_lock); | |
1033 else { | |
1034 /* Remove references to xprt from the `xprt-ready' queue */ | |
1035 svc_xprt_qdelete(xprt->xp_pool, xprt); | |
1036 | |
1037 /* Unregister xprt from the pool's transport list */ | |
1038 svc_xprt_unregister(xprt); | |
1039 svc_callout_free(xprt); | |
1040 SVC_DESTROY(xprt); | |
1041 } | |
1042 } | |
1043 | |
1044 /* | |
1045 * Find a dispatch routine for a given prog/vers pair. | |
1046 * This function is called from svc_getreq() to search the callout | |
1047 * table for an entry with a matching RPC program number `prog' | |
1048 * and a version range that covers `vers'. | |
1049 * - if it finds a matching entry it returns pointer to the dispatch routine | |
1050 * - otherwise it returns NULL and, if `minp' or `maxp' are not NULL, | |
1051 * fills them with, respectively, lowest version and highest version | |
1052 * supported for the program `prog' | |
1053 */ | |
1054 static SVC_DISPATCH * | |
1055 svc_callout_find(SVCXPRT *xprt, rpcprog_t prog, rpcvers_t vers, | |
1056 rpcvers_t *vers_min, rpcvers_t *vers_max) | |
1057 { | |
1058 SVC_CALLOUT_TABLE *sct = xprt->xp_sct; | |
1059 int i; | |
1060 | |
1061 *vers_min = ~(rpcvers_t)0; | |
1062 *vers_max = 0; | |
1063 | |
1064 for (i = 0; i < sct->sct_size; i++) { | |
1065 SVC_CALLOUT *sc = &sct->sct_sc[i]; | |
1066 | |
1067 if (prog == sc->sc_prog) { | |
1068 if (vers >= sc->sc_versmin && vers <= sc->sc_versmax) | |
1069 return (sc->sc_dispatch); | |
1070 | |
1071 if (*vers_max < sc->sc_versmax) | |
1072 *vers_max = sc->sc_versmax; | |
1073 if (*vers_min > sc->sc_versmin) | |
1074 *vers_min = sc->sc_versmin; | |
1075 } | |
1076 } | |
1077 | |
1078 return (NULL); | |
1079 } | |
1080 | |
1081 /* | |
1082 * Optionally free callout table allocated for this transport by | |
1083 * the service provider. | |
1084 */ | |
1085 static void | |
1086 svc_callout_free(SVCMASTERXPRT *xprt) | |
1087 { | |
1088 SVC_CALLOUT_TABLE *sct = xprt->xp_sct; | |
1089 | |
1090 if (sct->sct_free) { | |
1091 kmem_free(sct->sct_sc, sct->sct_size * sizeof (SVC_CALLOUT)); | |
1092 kmem_free(sct, sizeof (SVC_CALLOUT_TABLE)); | |
1093 } | |
1094 } | |
1095 | |
1096 /* | |
1097 * Send a reply to an RPC request | |
1098 * | |
1099 * PSARC 2003/523 Contract Private Interface | |
1100 * svc_sendreply | |
1101 * Changes must be reviewed by Solaris File Sharing | |
1102 * Changes must be communicated to contract-2003-523@sun.com | |
1103 */ | |
1104 bool_t | |
1105 svc_sendreply(const SVCXPRT *clone_xprt, const xdrproc_t xdr_results, | |
1106 const caddr_t xdr_location) | |
1107 { | |
1108 struct rpc_msg rply; | |
1109 | |
1110 rply.rm_direction = REPLY; | |
1111 rply.rm_reply.rp_stat = MSG_ACCEPTED; | |
1112 rply.acpted_rply.ar_verf = clone_xprt->xp_verf; | |
1113 rply.acpted_rply.ar_stat = SUCCESS; | |
1114 rply.acpted_rply.ar_results.where = xdr_location; | |
1115 rply.acpted_rply.ar_results.proc = xdr_results; | |
1116 | |
1117 return (SVC_REPLY((SVCXPRT *)clone_xprt, &rply)); | |
1118 } | |
1119 | |
1120 /* | |
1121 * No procedure error reply | |
1122 * | |
1123 * PSARC 2003/523 Contract Private Interface | |
1124 * svcerr_noproc | |
1125 * Changes must be reviewed by Solaris File Sharing | |
1126 * Changes must be communicated to contract-2003-523@sun.com | |
1127 */ | |
1128 void | |
1129 svcerr_noproc(const SVCXPRT *clone_xprt) | |
1130 { | |
1131 struct rpc_msg rply; | |
1132 | |
1133 rply.rm_direction = REPLY; | |
1134 rply.rm_reply.rp_stat = MSG_ACCEPTED; | |
1135 rply.acpted_rply.ar_verf = clone_xprt->xp_verf; | |
1136 rply.acpted_rply.ar_stat = PROC_UNAVAIL; | |
1137 SVC_FREERES((SVCXPRT *)clone_xprt); | |
1138 SVC_REPLY((SVCXPRT *)clone_xprt, &rply); | |
1139 } | |
1140 | |
1141 /* | |
1142 * Can't decode arguments error reply | |
1143 * | |
1144 * PSARC 2003/523 Contract Private Interface | |
1145 * svcerr_decode | |
1146 * Changes must be reviewed by Solaris File Sharing | |
1147 * Changes must be communicated to contract-2003-523@sun.com | |
1148 */ | |
1149 void | |
1150 svcerr_decode(const SVCXPRT *clone_xprt) | |
1151 { | |
1152 struct rpc_msg rply; | |
1153 | |
1154 rply.rm_direction = REPLY; | |
1155 rply.rm_reply.rp_stat = MSG_ACCEPTED; | |
1156 rply.acpted_rply.ar_verf = clone_xprt->xp_verf; | |
1157 rply.acpted_rply.ar_stat = GARBAGE_ARGS; | |
1158 SVC_FREERES((SVCXPRT *)clone_xprt); | |
1159 SVC_REPLY((SVCXPRT *)clone_xprt, &rply); | |
1160 } | |
1161 | |
1162 /* | |
1163 * Some system error | |
1164 */ | |
1165 void | |
1166 svcerr_systemerr(const SVCXPRT *clone_xprt) | |
1167 { | |
1168 struct rpc_msg rply; | |
1169 | |
1170 rply.rm_direction = REPLY; | |
1171 rply.rm_reply.rp_stat = MSG_ACCEPTED; | |
1172 rply.acpted_rply.ar_verf = clone_xprt->xp_verf; | |
1173 rply.acpted_rply.ar_stat = SYSTEM_ERR; | |
1174 SVC_FREERES((SVCXPRT *)clone_xprt); | |
1175 SVC_REPLY((SVCXPRT *)clone_xprt, &rply); | |
1176 } | |
1177 | |
1178 /* | |
1179 * Authentication error reply | |
1180 */ | |
1181 void | |
1182 svcerr_auth(const SVCXPRT *clone_xprt, const enum auth_stat why) | |
1183 { | |
1184 struct rpc_msg rply; | |
1185 | |
1186 rply.rm_direction = REPLY; | |
1187 rply.rm_reply.rp_stat = MSG_DENIED; | |
1188 rply.rjcted_rply.rj_stat = AUTH_ERROR; | |
1189 rply.rjcted_rply.rj_why = why; | |
1190 SVC_FREERES((SVCXPRT *)clone_xprt); | |
1191 SVC_REPLY((SVCXPRT *)clone_xprt, &rply); | |
1192 } | |
1193 | |
1194 /* | |
1195 * Authentication too weak error reply | |
1196 */ | |
1197 void | |
1198 svcerr_weakauth(const SVCXPRT *clone_xprt) | |
1199 { | |
1200 svcerr_auth((SVCXPRT *)clone_xprt, AUTH_TOOWEAK); | |
1201 } | |
1202 | |
1203 /* | |
6786
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1204 * Authentication error; bad credentials |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1205 */ |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1206 void |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1207 svcerr_badcred(const SVCXPRT *clone_xprt) |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1208 { |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1209 struct rpc_msg rply; |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1210 |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1211 rply.rm_direction = REPLY; |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1212 rply.rm_reply.rp_stat = MSG_DENIED; |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1213 rply.rjcted_rply.rj_stat = AUTH_ERROR; |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1214 rply.rjcted_rply.rj_why = AUTH_BADCRED; |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1215 SVC_FREERES((SVCXPRT *)clone_xprt); |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1216 SVC_REPLY((SVCXPRT *)clone_xprt, &rply); |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1217 } |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1218 |
8978aafca942
6700655 NFSv4.0 server silently drops reply on sec_svc_getcred() failure
rmesta
parents:
4741
diff
changeset
|
1219 /* |
0 | 1220 * Program unavailable error reply |
1221 * | |
1222 * PSARC 2003/523 Contract Private Interface | |
1223 * svcerr_noprog | |
1224 * Changes must be reviewed by Solaris File Sharing | |
1225 * Changes must be communicated to contract-2003-523@sun.com | |
1226 */ | |
1227 void | |
1228 svcerr_noprog(const SVCXPRT *clone_xprt) | |
1229 { | |
1230 struct rpc_msg rply; | |
1231 | |
1232 rply.rm_direction = REPLY; | |
1233 rply.rm_reply.rp_stat = MSG_ACCEPTED; | |
1234 rply.acpted_rply.ar_verf = clone_xprt->xp_verf; | |
1235 rply.acpted_rply.ar_stat = PROG_UNAVAIL; | |
1236 SVC_FREERES((SVCXPRT *)clone_xprt); | |
1237 SVC_REPLY((SVCXPRT *)clone_xprt, &rply); | |
1238 } | |
1239 | |
1240 /* | |
1241 * Program version mismatch error reply | |
1242 * | |
1243 * PSARC 2003/523 Contract Private Interface | |
1244 * svcerr_progvers | |
1245 * Changes must be reviewed by Solaris File Sharing | |
1246 * Changes must be communicated to contract-2003-523@sun.com | |
1247 */ | |
1248 void | |
1249 svcerr_progvers(const SVCXPRT *clone_xprt, | |
1250 const rpcvers_t low_vers, const rpcvers_t high_vers) | |
1251 { | |
1252 struct rpc_msg rply; | |
1253 | |
1254 rply.rm_direction = REPLY; | |
1255 rply.rm_reply.rp_stat = MSG_ACCEPTED; | |
1256 rply.acpted_rply.ar_verf = clone_xprt->xp_verf; | |
1257 rply.acpted_rply.ar_stat = PROG_MISMATCH; | |
1258 rply.acpted_rply.ar_vers.low = low_vers; | |
1259 rply.acpted_rply.ar_vers.high = high_vers; | |
1260 SVC_FREERES((SVCXPRT *)clone_xprt); | |
1261 SVC_REPLY((SVCXPRT *)clone_xprt, &rply); | |
1262 } | |
1263 | |
1264 /* | |
1265 * Get server side input from some transport. | |
1266 * | |
1267 * Statement of authentication parameters management: | |
1268 * This function owns and manages all authentication parameters, specifically | |
1269 * the "raw" parameters (msg.rm_call.cb_cred and msg.rm_call.cb_verf) and | |
1270 * the "cooked" credentials (rqst->rq_clntcred). | |
1271 * However, this function does not know the structure of the cooked | |
1272 * credentials, so it makes the following assumptions: | |
1273 * a) the structure is contiguous (no pointers), and | |
1274 * b) the cred structure size does not exceed RQCRED_SIZE bytes. | |
1275 * In all events, all three parameters are freed upon exit from this routine. | |
1276 * The storage is trivially managed on the call stack in user land, but | |
1277 * is malloced in kernel land. | |
1278 * | |
1279 * Note: the xprt's xp_svc_lock is not held while the service's dispatch | |
1280 * routine is running. If we decide to implement svc_unregister(), we'll | |
1281 * need to decide whether it's okay for a thread to unregister a service | |
1282 * while a request is being processed. If we decide that this is a | |
1283 * problem, we can probably use some sort of reference counting scheme to | |
1284 * keep the callout entry from going away until the request has completed. | |
1285 */ | |
1286 static void | |
1287 svc_getreq( | |
1288 SVCXPRT *clone_xprt, /* clone transport handle */ | |
1289 mblk_t *mp) | |
1290 { | |
1291 struct rpc_msg msg; | |
1292 struct svc_req r; | |
1293 char *cred_area; /* too big to allocate on call stack */ | |
1294 | |
1295 TRACE_0(TR_FAC_KRPC, TR_SVC_GETREQ_START, | |
1296 "svc_getreq_start:"); | |
1297 | |
1298 ASSERT(clone_xprt->xp_master != NULL); | |
8778 b4169d2ab299: PSARC 2007/670 db_credp update (Erik Nordmark <Erik.Nordmark@Sun.COM>)
8778 | 1299 ASSERT(!is_system_labeled() || msg_getcred(mp, NULL) != NULL ||
1676 | 1300 mp->b_datap->db_type != M_DATA); |
0 | 1301 |
1302 /* | |
1303 * Firstly, allocate the authentication parameters' storage | |
1304 */ | |
1305 mutex_enter(&rqcred_lock); | |
1306 if (rqcred_head) { | |
1307 cred_area = rqcred_head; | |
1308 | |
1309 /* LINTED pointer alignment */ | |
1310 rqcred_head = *(caddr_t *)rqcred_head; | |
1311 mutex_exit(&rqcred_lock); | |
1312 } else { | |
1313 mutex_exit(&rqcred_lock); | |
1314 cred_area = kmem_alloc(2 * MAX_AUTH_BYTES + RQCRED_SIZE, | |
1315 KM_SLEEP); | |
1316 } | |
1317 msg.rm_call.cb_cred.oa_base = cred_area; | |
1318 msg.rm_call.cb_verf.oa_base = &(cred_area[MAX_AUTH_BYTES]); | |
1319 r.rq_clntcred = &(cred_area[2 * MAX_AUTH_BYTES]); | |
1320 | |
1321 /* | |
1676 | 1322 * The underlying transport recv routine may modify mblk data
1323 * and make it difficult to extract label afterwards. So | |
1324 * get the label from the raw mblk data now. | |
1325 */ | |
1326 if (is_system_labeled()) { | |
8778 | 1327 cred_t *cr;
1676 | 1328 
1329 r.rq_label = kmem_alloc(sizeof (bslabel_t), KM_SLEEP);
8778 | 1330 cr = msg_getcred(mp, NULL);
1331 ASSERT(cr != NULL);
1332 
1333 bcopy(label2bslabel(crgetlabel(cr)), r.rq_label,
1676 | 1334 sizeof (bslabel_t)); |
1335 } else { | |
1336 r.rq_label = NULL; | |
1337 } | |
1338 | |
1339 /* | |
0 | 1340 * Now receive a message from the transport. |
1341 */ | |
1342 if (SVC_RECV(clone_xprt, mp, &msg)) { | |
1343 void (*dispatchroutine) (struct svc_req *, SVCXPRT *); | |
1344 rpcvers_t vers_min; | |
1345 rpcvers_t vers_max; | |
1346 bool_t no_dispatch; | |
1347 enum auth_stat why; | |
1348 | |
1349 /* | |
1350 * Find the registered program and call its | |
1351 * dispatch routine. | |
1352 */ | |
1353 r.rq_xprt = clone_xprt; | |
1354 r.rq_prog = msg.rm_call.cb_prog; | |
1355 r.rq_vers = msg.rm_call.cb_vers; | |
1356 r.rq_proc = msg.rm_call.cb_proc; | |
1357 r.rq_cred = msg.rm_call.cb_cred; | |
1358 | |
1359 /* | |
1360 * First authenticate the message. | |
1361 */ | |
1362 TRACE_0(TR_FAC_KRPC, TR_SVC_GETREQ_AUTH_START, | |
1363 "svc_getreq_auth_start:"); | |
1364 if ((why = sec_svc_msg(&r, &msg, &no_dispatch)) != AUTH_OK) { | |
1365 TRACE_1(TR_FAC_KRPC, TR_SVC_GETREQ_AUTH_END, | |
1366 "svc_getreq_auth_end:(%S)", "failed"); | |
1367 svcerr_auth(clone_xprt, why); | |
1368 /* | |
1369 * Free the arguments. | |
1370 */ | |
1371 (void) SVC_FREEARGS(clone_xprt, NULL, NULL); | |
1372 } else if (no_dispatch) { | |
1373 /* | |
1374 * XXX - when bug id 4053736 is done, remove | |
1375 * the SVC_FREEARGS() call. | |
1376 */ | |
1377 (void) SVC_FREEARGS(clone_xprt, NULL, NULL); | |
1378 } else { | |
1379 TRACE_1(TR_FAC_KRPC, TR_SVC_GETREQ_AUTH_END, | |
1380 "svc_getreq_auth_end:(%S)", "good"); | |
1381 | |
1382 dispatchroutine = svc_callout_find(clone_xprt, | |
1383 r.rq_prog, r.rq_vers, &vers_min, &vers_max); | |
1384 | |
1385 if (dispatchroutine) { | |
1386 (*dispatchroutine) (&r, clone_xprt); | |
1387 } else { | |
1388 /* | |
1389 * If we got here, the program or version | |
1390 * is not served ... | |
1391 */ | |
1392 if (vers_max == 0 || | |
1393 version_keepquiet(clone_xprt)) | |
1394 svcerr_noprog(clone_xprt); | |
1395 else | |
1396 svcerr_progvers(clone_xprt, vers_min, | |
1397 vers_max); | |
1398 | |
1399 /* | |
1400 * Free the arguments. For successful calls | |
1401 * this is done by the dispatch routine. | |
1402 */ | |
1403 (void) SVC_FREEARGS(clone_xprt, NULL, NULL); | |
1404 /* Fall through to ... */ | |
1405 } | |
1406 /* | |
1407 * Call cleanup procedure for RPCSEC_GSS. | |
1408 * This is a hack since there is currently no | |
1409 * op, such as SVC_CLEANAUTH. rpc_gss_cleanup | |
1410 * should only be called for a non null proc. | |
1411 * Null procs in RPC GSS are overloaded to | |
1412 * provide context setup and control. The main | |
1413 * purpose of rpc_gss_cleanup is to decrement the | |
1414 * reference count associated with the cached | |
1415 * GSS security context. We should never get here | |
1416 * for an RPCSEC_GSS null proc since *no_dispatch | |
1417 * would have been set to true from sec_svc_msg above. | |
1418 */ | |
1419 if (r.rq_cred.oa_flavor == RPCSEC_GSS) | |
1420 rpc_gss_cleanup(clone_xprt); | |
1421 } | |
1422 } | |
1423 | |
1676 | 1424 if (r.rq_label != NULL) |
1425 kmem_free(r.rq_label, sizeof (bslabel_t)); | |
1426 | |
0 | 1427 /* |
1428 * Free authentication parameters' storage | |
1429 */ | |
1430 mutex_enter(&rqcred_lock); | |
1431 /* LINTED pointer alignment */ | |
1432 *(caddr_t *)cred_area = rqcred_head; | |
1433 rqcred_head = cred_area; | |
1434 mutex_exit(&rqcred_lock); | |
1435 } | |
1436 | |
1437 /* | |
1438 * Allocate new clone transport handle. | |
1439 */ | |
10721 2a4f0c5ca772: 6791302 RPCSEC_GSS svc should be able to handle a misbehaving client (Glenn Barry <Glenn.Barry@Sun.COM>)
10721 | 1440 SVCXPRT *
0 | 1441 svc_clone_init(void) |
1442 { | |
1443 SVCXPRT *clone_xprt; | |
1444 | |
1445 clone_xprt = kmem_zalloc(sizeof (SVCXPRT), KM_SLEEP); | |
1446 clone_xprt->xp_cred = crget(); | |
1447 return (clone_xprt); | |
1448 } | |
1449 | |
1450 /* | |
1451 * Free memory allocated by svc_clone_init. | |
1452 */ | |
10721 | 1453 void
0 | 1454 svc_clone_free(SVCXPRT *clone_xprt) |
1455 { | |
1456 /* Free credentials from crget() */ | |
1457 if (clone_xprt->xp_cred) | |
1458 crfree(clone_xprt->xp_cred); | |
1459 kmem_free(clone_xprt, sizeof (SVCXPRT)); | |
1460 } | |
1461 | |
1462 /* | |
1463 * Link a per-thread clone transport handle to a master | |
1464 * - increment a thread reference count on the master | |
1465 * - copy some of the master's fields to the clone | |
1466 * - call a transport specific clone routine. | |
1467 */ | |
10721 | 1468 void
11967 | 1469 svc_clone_link(SVCMASTERXPRT *xprt, SVCXPRT *clone_xprt, SVCXPRT *clone_xprt2)
0 | 1470 { |
1471 cred_t *cred = clone_xprt->xp_cred; | |
1472 | |
1473 ASSERT(cred); | |
1474 | |
1475 /* | |
1476 * Bump up master's thread count. | |
1477 * Linking a per-thread clone transport handle to a master | |
1478 * associates a service thread with the master. | |
1479 */ | |
1480 mutex_enter(&xprt->xp_thread_lock); | |
1481 xprt->xp_threads++; | |
1482 mutex_exit(&xprt->xp_thread_lock); | |
1483 | |
1484 /* Clear everything */ | |
1485 bzero(clone_xprt, sizeof (SVCXPRT)); | |
1486 | |
1487 /* Set pointer to the master transport structure */ | |
1488 clone_xprt->xp_master = xprt; | |
1489 | |
1490 /* Structure copy of all the common fields */ | |
1491 clone_xprt->xp_xpc = xprt->xp_xpc; | |
1492 | |
1493 /* Restore per-thread fields (xp_cred) */ | |
1494 clone_xprt->xp_cred = cred; | |
1495 | |
11967 | 1496 if (clone_xprt2)
1497 SVC_CLONE_XPRT(clone_xprt2, clone_xprt);
0 | 1498 } |
1499 | |
1500 /* | |
1501 * Unlink a non-detached clone transport handle from a master | |
1502 * - decrement a thread reference count on the master | |
1503 * - if the transport is closing (xp_wq is NULL) call svc_xprt_cleanup(); | |
1504 * if this is the last non-detached/absolute thread on this transport | |
1505 * then it will close/destroy the transport | |
1506 * - call transport specific function to destroy the clone handle | |
1507 * - clear xp_master to avoid recursion. | |
1508 */ | |
10721 | 1509 void
0 | 1510 svc_clone_unlink(SVCXPRT *clone_xprt) |
1511 { | |
1512 SVCMASTERXPRT *xprt = clone_xprt->xp_master; | |
1513 | |
1514 /* This cannot be a detached thread */ | |
1515 ASSERT(!clone_xprt->xp_detached); | |
1516 ASSERT(xprt->xp_threads > 0); | |
1517 | |
1518 /* Decrement a reference count on the transport */ | |
1519 mutex_enter(&xprt->xp_thread_lock); | |
1520 xprt->xp_threads--; | |
1521 | |
1522 /* svc_xprt_cleanup() unlocks xp_thread_lock or destroys xprt */ | |
1523 if (xprt->xp_wq) | |
1524 mutex_exit(&xprt->xp_thread_lock); | |
1525 else | |
1526 svc_xprt_cleanup(xprt, FALSE); | |
1527 | |
1528 /* Call a transport specific clone `destroy' function */ | |
1529 SVC_CLONE_DESTROY(clone_xprt); | |
1530 | |
1531 /* Clear xp_master */ | |
1532 clone_xprt->xp_master = NULL; | |
1533 } | |
1534 | |
1535 /* | |
1536 * Unlink a detached clone transport handle from a master | |
1537 * - decrement the thread count on the master | |
1538 * - if the transport is closing (xp_wq is NULL) call svc_xprt_cleanup(); | |
1539 * if this is the last thread on this transport then it will destroy | |
1540 * the transport. | |
1541 * - call a transport specific function to destroy the clone handle | |
1542 * - clear xp_master to avoid recursion. | |
1543 */ | |
1544 static void | |
1545 svc_clone_unlinkdetached(SVCXPRT *clone_xprt) | |
1546 { | |
1547 SVCMASTERXPRT *xprt = clone_xprt->xp_master; | |
1548 | |
1549 /* This must be a detached thread */ | |
1550 ASSERT(clone_xprt->xp_detached); | |
1551 ASSERT(xprt->xp_detached_threads > 0); | |
1552 ASSERT(xprt->xp_threads + xprt->xp_detached_threads > 0); | |
1553 | |
1554 /* Grab xprt->xp_thread_lock and decrement link counts */ | |
1555 mutex_enter(&xprt->xp_thread_lock); | |
1556 xprt->xp_detached_threads--; | |
1557 | |
1558 /* svc_xprt_cleanup() unlocks xp_thread_lock or destroys xprt */ | |
1559 if (xprt->xp_wq) | |
1560 mutex_exit(&xprt->xp_thread_lock); | |
1561 else | |
1562 svc_xprt_cleanup(xprt, TRUE); | |
1563 | |
1564 /* Call transport specific clone `destroy' function */ | |
1565 SVC_CLONE_DESTROY(clone_xprt); | |
1566 | |
1567 /* Clear xp_master */ | |
1568 clone_xprt->xp_master = NULL; | |
1569 } | |
1570 | |
1571 /* | |
1572 * Try to exit a non-detached service thread | |
1573 * - check if there are enough threads left | |
1574 * - if this thread (i.e. its clone transport handle) is linked | |
1575 * to a master transport then unlink it | |
1576 * - free the clone structure | |
1577 * - return to userland for thread exit | |
1578 * | |
1579 * If this is the last non-detached or the last thread on this | |
1580 * transport then the call to svc_clone_unlink() will, respectively, | |
1581 * close and/or destroy the transport. | |
1582 */ | |
1583 static void | |
1584 svc_thread_exit(SVCPOOL *pool, SVCXPRT *clone_xprt) | |
1585 { | |
1586 if (clone_xprt->xp_master) | |
1587 svc_clone_unlink(clone_xprt); | |
1588 svc_clone_free(clone_xprt); | |
1589 | |
1590 mutex_enter(&pool->p_thread_lock); | |
1591 pool->p_threads--; | |
1592 if (pool->p_closing && svc_pool_tryexit(pool)) | |
1593 /* return - thread exit will be handled at user level */ | |
1594 return; | |
1595 mutex_exit(&pool->p_thread_lock); | |
1596 | |
1597 /* return - thread exit will be handled at user level */ | |
1598 } | |
1599 | |
1600 /* | |
1601 * Exit a detached service thread that returned to svc_run | |
1602 * - decrement the `detached thread' count for the pool | |
1603 * - unlink the detached clone transport handle from the master | |
1604 * - free the clone structure | |
1605 * - return to userland for thread exit | |
1606 * | |
1607 * If this is the last thread on this transport then the call | |
1608 * to svc_clone_unlinkdetached() will destroy the transport. | |
1609 */ | |
1610 static void | |
1611 svc_thread_exitdetached(SVCPOOL *pool, SVCXPRT *clone_xprt) | |
1612 { | |
1613 /* This must be a detached thread */ | |
1614 ASSERT(clone_xprt->xp_master); | |
1615 ASSERT(clone_xprt->xp_detached); | |
1616 ASSERT(!MUTEX_HELD(&pool->p_thread_lock)); | |
1617 | |
1618 svc_clone_unlinkdetached(clone_xprt); | |
1619 svc_clone_free(clone_xprt); | |
1620 | |
1621 mutex_enter(&pool->p_thread_lock); | |
1622 | |
1623 ASSERT(pool->p_reserved_threads >= 0); | |
1624 ASSERT(pool->p_detached_threads > 0); | |
1625 | |
1626 pool->p_detached_threads--; | |
1627 if (pool->p_closing && svc_pool_tryexit(pool)) | |
1628 /* return - thread exit will be handled at user level */ | |
1629 return; | |
1630 mutex_exit(&pool->p_thread_lock); | |
1631 | |
1632 /* return - thread exit will be handled at user level */ | |
1633 } | |
1634 | |
1635 /* | |
1636 * PSARC 2003/523 Contract Private Interface | |
1637 * svc_wait | |
1638 * Changes must be reviewed by Solaris File Sharing | |
1639 * Changes must be communicated to contract-2003-523@sun.com | |
1640 */ | |
1641 int | |
1642 svc_wait(int id) | |
1643 { | |
1644 SVCPOOL *pool; | |
1645 int err = 0; | |
1646 struct svc_globals *svc; | |
1647 | |
1648 svc = zone_getspecific(svc_zone_key, curproc->p_zone); | |
1649 mutex_enter(&svc->svc_plock); | |
1650 pool = svc_pool_find(svc, id); | |
1651 mutex_exit(&svc->svc_plock); | |
1652 | |
1653 if (pool == NULL) | |
1654 return (ENOENT); | |
1655 | |
1656 mutex_enter(&pool->p_user_lock); | |
1657 | |
1658 /* Check if there's already a user thread waiting on this pool */ | |
1659 if (pool->p_user_waiting) { | |
1660 mutex_exit(&pool->p_user_lock); | |
1661 return (EBUSY); | |
1662 } | |
1663 | |
1664 pool->p_user_waiting = TRUE; | |
1665 | |
1666 /* Go to sleep, waiting for the signaled flag. */ | |
1667 while (!pool->p_signal_create_thread && !pool->p_user_exit) { | |
1668 if (cv_wait_sig(&pool->p_user_cv, &pool->p_user_lock) == 0) { | |
1669 /* Interrupted, return to handle exit or signal */ | |
1670 pool->p_user_waiting = FALSE; | |
1671 pool->p_signal_create_thread = FALSE; | |
1672 mutex_exit(&pool->p_user_lock); | |
1673 | |
1674 /* | |
1675 * Thread has been interrupted and therefore | |
1676 * the service daemon is leaving as well so | |
1677 * let's go ahead and remove the service | |
1678 * pool at this time. | |
1679 */ | |
1680 mutex_enter(&svc->svc_plock); | |
1681 svc_pool_unregister(svc, pool); | |
1682 mutex_exit(&svc->svc_plock); | |
1683 | |
1684 return (EINTR); | |
1685 } | |
1686 } | |
1687 | |
1688 pool->p_signal_create_thread = FALSE; | |
1689 pool->p_user_waiting = FALSE; | |
1690 | |
1691 /* | |
1692 * About to exit the service pool. Set return value | |
1693 * to let the userland code know our intent. Signal | |
1694 * svc_thread_creator() so that it can clean up the | |
1695 * pool structure. | |
1696 */ | |
1697 if (pool->p_user_exit) { | |
1698 err = ECANCELED; | |
1699 cv_signal(&pool->p_user_cv); | |
1700 } | |
1701 | |
1702 mutex_exit(&pool->p_user_lock); | |
1703 | |
1704 /* Return to userland with error code, for possible thread creation. */ | |
1705 return (err); | |
1706 } | |
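The svc_wait() return-code contract above (ENOENT for a missing pool, EBUSY if a daemon thread already waits, EINTR on signal, ECANCELED at pool shutdown, 0 when a new service thread is wanted) can be illustrated with a simplified single-threaded model. This is a sketch, not the kernel code: `model_pool` and `model_svc_wait` are hypothetical names, and the cv_wait_sig() sleep is replaced by an `interrupted` parameter.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical single-threaded model of the svc_wait() return-code
 * contract. The real code sleeps on p_user_cv; here the pool state is
 * simply inspected, and `interrupted' models cv_wait_sig() returning 0. */
struct model_pool {
	int p_user_waiting;		/* another daemon thread already waits */
	int p_user_exit;		/* pool is shutting down */
	int p_signal_create_thread;	/* creator asked for a new thread */
};

static int
model_svc_wait(struct model_pool *pool, int interrupted)
{
	if (pool == NULL)		/* models svc_pool_find() failing */
		return (ENOENT);
	if (pool->p_user_waiting)	/* only one waiter per pool */
		return (EBUSY);
	pool->p_user_waiting = 1;
	if (interrupted &&
	    !pool->p_signal_create_thread && !pool->p_user_exit) {
		/* signal received while sleeping: daemon is leaving */
		pool->p_user_waiting = 0;
		return (EINTR);
	}
	pool->p_signal_create_thread = 0;
	pool->p_user_waiting = 0;
	if (pool->p_user_exit)		/* pool is closing */
		return (ECANCELED);
	return (0);	/* userland should create one more service thread */
}
```

The model makes the asymmetry visible: 0 and ECANCELED are normal wakeups, while EINTR is the only path that (in the real code) also unregisters the pool.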
1707 | |
1708 /* | |
1709 * `Service threads' creator thread. | |
1710 * The creator thread waits for a signal to create a new thread. | |
1711 */ | |
1712 static void | |
1713 svc_thread_creator(SVCPOOL *pool) | |
1714 { | |
1715 callb_cpr_t cpr_info; /* CPR info for the creator thread */ | |
1716 | |
1717 CALLB_CPR_INIT(&cpr_info, &pool->p_creator_lock, callb_generic_cpr, | |
1718 "svc_thread_creator"); | |
1719 | |
1720 for (;;) { | |
1721 mutex_enter(&pool->p_creator_lock); | |
1722 | |
1723 /* Check if someone set the exit flag */ | |
1724 if (pool->p_creator_exit) | |
1725 break; | |
1726 | |
1727 /* Clear the `signaled' flag and go to sleep */ | |
1728 pool->p_creator_signaled = FALSE; | |
1729 | |
1730 CALLB_CPR_SAFE_BEGIN(&cpr_info); | |
1731 cv_wait(&pool->p_creator_cv, &pool->p_creator_lock); | |
1732 CALLB_CPR_SAFE_END(&cpr_info, &pool->p_creator_lock); | |
1733 | |
1734 /* Check if someone signaled to exit */ | |
1735 if (pool->p_creator_exit) | |
1736 break; | |
1737 | |
1738 mutex_exit(&pool->p_creator_lock); | |
1739 | |
1740 mutex_enter(&pool->p_thread_lock); | |
1741 | |
1742 /* | |
1743 * When the pool is in closing state and all the transports | |
1744 * are gone the creator should not create any new threads. | |
1745 */ | |
1746 if (pool->p_closing) { | |
1747 rw_enter(&pool->p_lrwlock, RW_READER); | |
1748 if (pool->p_lcount == 0) { | |
1749 rw_exit(&pool->p_lrwlock); | |
1750 mutex_exit(&pool->p_thread_lock); | |
1751 continue; | |
1752 } | |
1753 rw_exit(&pool->p_lrwlock); | |
1754 } | |
1755 | |
1756 /* | |
1757 * Create a new service thread now. | |
1758 */ | |
1759 ASSERT(pool->p_reserved_threads >= 0); | |
1760 ASSERT(pool->p_detached_threads >= 0); | |
1761 | |
1762 if (pool->p_threads + pool->p_detached_threads < | |
1763 pool->p_maxthreads) { | |
1764 /* | |
1765 * Signal the service pool wait thread | |
1766 * only if it hasn't already been signaled. | |
1767 */ | |
1768 mutex_enter(&pool->p_user_lock); | |
1769 if (pool->p_signal_create_thread == FALSE) { | |
1770 pool->p_signal_create_thread = TRUE; | |
1771 cv_signal(&pool->p_user_cv); | |
1772 } | |
1773 mutex_exit(&pool->p_user_lock); | |
1774 | |
1775 } | |
1776 | |
1777 mutex_exit(&pool->p_thread_lock); | |
1778 } | |
1779 | |
1780 /* | |
1781 * Pool is closed. Cleanup and exit. | |
1782 */ | |
1783 | |
1784 /* Signal userland creator thread that it can stop now. */ | |
1785 mutex_enter(&pool->p_user_lock); | |
1786 pool->p_user_exit = TRUE; | |
1787 cv_broadcast(&pool->p_user_cv); | |
1788 mutex_exit(&pool->p_user_lock); | |
1789 | |
1790 /* Wait for svc_wait() to be done with the pool */ | |
1791 mutex_enter(&pool->p_user_lock); | |
1792 while (pool->p_user_waiting) { | |
1793 CALLB_CPR_SAFE_BEGIN(&cpr_info); | |
1794 cv_wait(&pool->p_user_cv, &pool->p_user_lock); | |
1795 CALLB_CPR_SAFE_END(&cpr_info, &pool->p_creator_lock); | |
1796 } | |
1797 mutex_exit(&pool->p_user_lock); | |
1798 | |
1799 CALLB_CPR_EXIT(&cpr_info); | |
1800 svc_pool_cleanup(pool); | |
1801 zthread_exit(); | |
1802 } | |
1803 | |
1804 /* | |
1805 * If the creator thread is idle signal it to create | |
1806 * a new service thread. | |
1807 */ | |
1808 static void | |
1809 svc_creator_signal(SVCPOOL *pool) | |
1810 { | |
1811 mutex_enter(&pool->p_creator_lock); | |
1812 if (pool->p_creator_signaled == FALSE) { | |
1813 pool->p_creator_signaled = TRUE; | |
1814 cv_signal(&pool->p_creator_cv); | |
1815 } | |
1816 mutex_exit(&pool->p_creator_lock); | |
1817 } | |
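The p_creator_signaled flag above implements a wakeup-coalescing pattern: while the creator thread has a wakeup pending, further cv_signal() calls are suppressed; the creator clears the flag when it actually runs. A minimal sketch of just that handshake, with hypothetical names (`creator_state`, `wakeups_sent`) and the cv_signal() call replaced by a counter:

```c
#include <assert.h>

/* Sketch of the p_creator_signaled coalescing pattern. In the real
 * code both functions run under p_creator_lock. */
struct creator_state {
	int signaled;		/* models pool->p_creator_signaled */
	int wakeups_sent;	/* counts cv_signal(&p_creator_cv) calls */
};

static void
creator_signal(struct creator_state *cs)
{
	/* suppress redundant wakeups until the creator runs */
	if (!cs->signaled) {
		cs->signaled = 1;
		cs->wakeups_sent++;	/* models cv_signal() */
	}
}

static void
creator_wakeup(struct creator_state *cs)
{
	/* creator thread got the CPU: re-arm for the next wakeup */
	cs->signaled = 0;
}
```

Any number of callers between two creator wakeups collapse into a single cv_signal(), which keeps the hot request path from hammering the condition variable.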
1818 | |
1819 /* | |
1820 * Notify the creator thread to clean up and exit. | |
1821 */ | |
1822 static void | |
1823 svc_creator_signalexit(SVCPOOL *pool) | |
1824 { | |
1825 mutex_enter(&pool->p_creator_lock); | |
1826 pool->p_creator_exit = TRUE; | |
1827 cv_signal(&pool->p_creator_cv); | |
1828 mutex_exit(&pool->p_creator_lock); | |
1829 } | |
1830 | |
1831 /* | |
1832 * Polling part of the svc_run(). | |
1833 * - search for a transport with a pending request | |
1834 * - when one is found then latch the request lock and return to svc_run() | |
1835 * - if there is no request, go to sleep and wait for a signal | |
1836 * - handle two exceptions: | |
1837 * a) current transport is closing | |
1838 * b) timeout waiting for a new request | |
1839 * in both cases return to svc_run() | |
1840 */ | |
1841 static SVCMASTERXPRT * | |
1842 svc_poll(SVCPOOL *pool, SVCMASTERXPRT *xprt, SVCXPRT *clone_xprt) | |
1843 { | |
1844 /* | |
1845 * Main loop iterates until | |
1846 * a) we find a pending request, | |
1847 * b) detect that the current transport is closing | |
1848 * c) time out waiting for a new request. | |
1849 */ | |
1850 for (;;) { | |
1851 SVCMASTERXPRT *next; | |
1852 clock_t timeleft; | |
1853 | |
1854 /* | |
1855 * Step 1. | |
1856 * Check if there is a pending request on the current | |
1857 * transport handle so that we can avoid cloning. | |
1858 * If so then decrement the `pending-request' count for | |
1859 * the pool and return to svc_run(). | |
1860 * | |
1861 * We need to prevent potential starvation. If requests | |
1862 * keep arriving on the selected transport all the time, | |
1863 * the service threads will never switch to another | |
1864 * transport. With a limited number of service threads | |
1865 * some transports may never be serviced. | |
1866 * To prevent such a scenario we pick up at most | |
1867 * pool->p_max_same_xprt requests from the same transport | |
1868 * and then take a hint from the xprt-ready queue or walk | |
1869 * the transport list. | |
1870 */ | |
1871 if (xprt && xprt->xp_req_head && (!pool->p_qoverflow || | |
1872 clone_xprt->xp_same_xprt++ < pool->p_max_same_xprt)) { | |
1873 mutex_enter(&xprt->xp_req_lock); | |
1874 if (xprt->xp_req_head) { | |
1875 mutex_enter(&pool->p_req_lock); | |
1876 pool->p_reqs--; | |
4741 | 1877 if (pool->p_reqs == 0) | |
1878 pool->p_qoverflow = FALSE; | |
0 | 1879 mutex_exit(&pool->p_req_lock); |
1880 | |
1881 return (xprt); | |
1882 } | |
1883 mutex_exit(&xprt->xp_req_lock); | |
1884 } | |
1885 clone_xprt->xp_same_xprt = 0; | |
1886 | |
1887 /* | |
1888 * Step 2. | |
1889 * If there is no request on the current transport try to | |
1890 * find another transport with a pending request. | |
1891 */ | |
1892 mutex_enter(&pool->p_req_lock); | |
1893 pool->p_walkers++; | |
1894 mutex_exit(&pool->p_req_lock); | |
1895 | |
1896 /* | |
1897 * Make sure that transports will not be destroyed just | |
1898 * while we are checking them. | |
1899 */ | |
1900 rw_enter(&pool->p_lrwlock, RW_READER); | |
1901 | |
1902 for (;;) { | |
1903 SVCMASTERXPRT *hint; | |
1904 | |
1905 /* | |
1906 * Get the next transport from the xprt-ready queue. | |
1907 * This is a hint. There is no guarantee that the | |
1908 * transport still has a pending request since it | |
1909 * could be picked up by another thread in step 1. | |
1910 * | |
1911 * If the transport has a pending request then keep | |
1912 * it locked. Decrement the `pending-requests' for | |
1913 * the pool and `walking-threads' counts, and return | |
1914 * to svc_run(). | |
1915 */ | |
1916 hint = svc_xprt_qget(pool); | |
1917 | |
1918 if (hint && hint->xp_req_head) { | |
1919 mutex_enter(&hint->xp_req_lock); | |
1920 if (hint->xp_req_head) { | |
1921 rw_exit(&pool->p_lrwlock); | |
1922 | |
1923 mutex_enter(&pool->p_req_lock); | |
1924 pool->p_reqs--; | |
4741 | 1925 if (pool->p_reqs == 0) | |
1926 pool->p_qoverflow = FALSE; | |
0 | 1927 pool->p_walkers--; |
1928 mutex_exit(&pool->p_req_lock); | |
1929 | |
1930 return (hint); | |
1931 } | |
1932 mutex_exit(&hint->xp_req_lock); | |
1933 } | |
1934 | |
1935 /* | |
1936 * If there was no hint in the xprt-ready queue then | |
1937 * - if there are fewer pending requests than polling | |
1938 * threads, go to sleep | |
1939 * - otherwise check if there was an overflow in the | |
1940 * xprt-ready queue; if so, then we need to break | |
1941 * the `drain' mode | |
1942 */ | |
1943 if (hint == NULL) { | |
1944 if (pool->p_reqs < pool->p_walkers) { | |
1945 mutex_enter(&pool->p_req_lock); | |
1946 if (pool->p_reqs < pool->p_walkers) | |
1947 goto sleep; | |
1948 mutex_exit(&pool->p_req_lock); | |
1949 } | |
1950 if (pool->p_qoverflow) { | |
1951 break; | |
1952 } | |
1953 } | |
1954 } | |
1955 | |
1956 /* | |
1957 * If there was an overflow in the xprt-ready queue then we | |
1958 * need to switch to the `drain' mode, i.e. walk through the | |
1959 * pool's transport list and search for a transport with a | |
1960 * pending request. If we manage to drain all the pending | |
1961 * requests then we can clear the overflow flag. This will | |
1962 * switch svc_poll() back to taking hints from the xprt-ready | |
1963 * queue (which is generally more efficient). | |
1964 * | |
1965 * If there are no registered transports simply go to sleep. | |
1966 */ | |
1967 if (xprt == NULL && pool->p_lhead == NULL) { | |
1968 mutex_enter(&pool->p_req_lock); | |
1969 goto sleep; | |
1970 } | |
1971 | |
1972 /* | |
1973 * `Walk' through the pool's list of master server | |
1974 * transport handles. Continue to loop until there are | |
1975 * fewer pending requests than walking threads. | |
1976 */ | |
1977 next = xprt ? xprt->xp_next : pool->p_lhead; | |
1978 | |
1979 for (;;) { | |
1980 /* | |
1981 * Check if there is a request on this transport. | |
1982 * | |
1983 * Since blocking on a locked mutex is very expensive | |
1984 * check for a request without a lock first. We may miss | |
1985 * a request that is just being delivered, but this will | |
1986 * cost at most one full walk through the list. | |
1987 */ | |
1988 if (next->xp_req_head) { | |
1989 /* | |
1990 * Check again, now with a lock. | |
1991 */ | |
1992 mutex_enter(&next->xp_req_lock); | |
1993 if (next->xp_req_head) { | |
1994 rw_exit(&pool->p_lrwlock); | |
1995 | |
1996 mutex_enter(&pool->p_req_lock); | |
1997 pool->p_reqs--; | |
4741 | 1998 if (pool->p_reqs == 0) | |
1999 pool->p_qoverflow = FALSE; | |
0 | 2000 pool->p_walkers--; |
2001 mutex_exit(&pool->p_req_lock); | |
2002 | |
2003 return (next); | |
2004 } | |
2005 mutex_exit(&next->xp_req_lock); | |
2006 } | |
2007 | |
2008 /* | |
2009 * Continue to `walk' through the pool's | |
2010 * transport list until there are fewer requests | |
2011 * than walkers. Check this condition without | |
2012 * a lock first to avoid contention on a mutex. | |
2013 */ | |
2014 if (pool->p_reqs < pool->p_walkers) { | |
4741 | 2015 /* Check again, now with the lock. */ | |
0 | 2016 mutex_enter(&pool->p_req_lock); |
2017 if (pool->p_reqs < pool->p_walkers) | |
2018 break; /* goto sleep */ | |
2019 mutex_exit(&pool->p_req_lock); | |
2020 } | |
2021 | |
2022 next = next->xp_next; | |
2023 } | |
2024 | |
2025 sleep: | |
2026 /* | |
2027 * No work to do. Stop the `walk' and go to sleep. | |
2028 * Decrement the `walking-threads' count for the pool. | |
2029 */ | |
2030 pool->p_walkers--; | |
2031 rw_exit(&pool->p_lrwlock); | |
2032 | |
2033 /* | |
2034 * Count us as asleep, mark this thread as safe | |
2035 * for suspend and wait for a request. | |
2036 */ | |
2037 pool->p_asleep++; | |
11066 | 2038 timeleft = cv_reltimedwait_sig(&pool->p_req_cv, | |
2039 &pool->p_req_lock, pool->p_timeout, TR_CLOCK_TICK); | |
0 | 2040 |
2041 /* | |
2042 * If the drowsy flag is on this means that | |
2043 * someone has signaled a wakeup. In such a case | |
2045 * the `asleep-threads' count has already been updated | |
2045 * so just clear the flag. | |
2046 * | |
2047 * If the drowsy flag is off then we need to update | |
2048 * the `asleep-threads' count. | |
2049 */ | |
2050 if (pool->p_drowsy) { | |
2051 pool->p_drowsy = FALSE; | |
2052 /* | |
2053 * If the thread is here because it timed out, | |
2054 * instead of returning SVC_ETIMEDOUT, it is | |
2055 * time to do some more work. | |
2056 */ | |
2057 if (timeleft == -1) | |
2058 timeleft = 1; | |
2059 } else { | |
2060 pool->p_asleep--; | |
2061 } | |
2062 mutex_exit(&pool->p_req_lock); | |
2063 | |
2064 /* | |
2065 * If we received a signal while waiting for a | |
2066 * request, inform svc_run(), so that we can return | |
8139 | 2067 * to user level and exit. | |
0 | 2068 */ |
2069 if (timeleft == 0) | |
2070 return (SVC_EINTR); | |
2071 | |
2072 /* | |
2073 * If the current transport is gone then notify | |
2074 * svc_run() to unlink from it. | |
2075 */ | |
2076 if (xprt && xprt->xp_wq == NULL) | |
2077 return (SVC_EXPRTGONE); | |
2078 | |
2079 /* | |
2080 * If we have timed out waiting for a request inform | |
2081 * svc_run() that we probably don't need this thread. | |
2082 */ | |
2083 if (timeleft == -1) | |
2084 return (SVC_ETIMEDOUT); | |
2085 } | |
2086 } | |
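Step 1 of svc_poll() above contains a subtle starvation guard: a thread stays on its current transport only while the xprt-ready queue has not overflowed, or while fewer than p_max_same_xprt consecutive requests came from that transport. The short-circuit in `!p_qoverflow || xp_same_xprt++ < p_max_same_xprt` means the counter only advances in overflow (`drain') mode. A sketch of that decision, with hypothetical parameter names:

```c
#include <assert.h>

/* Model of svc_poll() step 1's starvation guard. `same_cnt' models
 * clone_xprt->xp_same_xprt; note it is incremented only when qoverflow
 * is set, mirroring the short-circuit || in the real condition. */
static int
take_from_same_xprt(int has_request, int qoverflow, int *same_cnt,
    int max_same)
{
	if (!has_request)
		return (0);		/* nothing pending here */
	if (!qoverflow)
		return (1);		/* fast path: hints are reliable */
	/* drain mode: at most max_same consecutive grabs, then move on */
	return ((*same_cnt)++ < max_same);
}
```

Once the cap is hit the thread falls through to step 2 and takes a hint from the xprt-ready queue (or walks the transport list), so no transport is starved even when one connection floods the server.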
2087 | |
2088 /* | |
2089 * Main loop of the kernel RPC server | |
2090 * - wait for input (find a transport with a pending request). | |
2091 * - dequeue the request | |
2092 * - call a registered server routine to process the requests | |
2093 * | |
2094 * There can be many threads running concurrently in this loop | |
2095 * on the same or on different transports. | |
2096 */ | |
2097 static int | |
2098 svc_run(SVCPOOL *pool) | |
2099 { | |
2100 SVCMASTERXPRT *xprt = NULL; /* master transport handle */ | |
2101 SVCXPRT *clone_xprt; /* clone for this thread */ | |
2102 proc_t *p = ttoproc(curthread); | |
2103 | |
2104 /* Allocate a clone transport handle for this thread */ | |
2105 clone_xprt = svc_clone_init(); | |
2106 | |
2107 /* | |
2108 * The loop iterates until the thread becomes | |
2109 * idle too long or the transport is gone. | |
2110 */ | |
2111 for (;;) { | |
2112 SVCMASTERXPRT *next; | |
2113 mblk_t *mp; | |
2114 | |
2115 TRACE_0(TR_FAC_KRPC, TR_SVC_RUN, "svc_run"); | |
2116 | |
2117 /* | |
2118 * If the process is exiting/killed, return | |
2119 * immediately without processing any more | |
2120 * requests. | |
2121 */ | |
390 | 2122 if (p->p_flag & (SEXITING | SKILLED)) { |
0 | 2123 svc_thread_exit(pool, clone_xprt); |
8139 | 2124 return (EINTR); | |
0 | 2125 } |
2126 | |
2127 /* Find a transport with a pending request */ | |
2128 next = svc_poll(pool, xprt, clone_xprt); | |
2129 | |
2130 /* | |
2131 * If svc_poll() finds a transport with a request | |
2132 * it latches xp_req_lock on it. Therefore we need | |
2133 * to dequeue the request and release the lock as | |
2134 * soon as possible. | |
2135 */ | |
2136 ASSERT(next != NULL && | |
2137 (next == SVC_EXPRTGONE || | |
2138 next == SVC_ETIMEDOUT || | |
2139 next == SVC_EINTR || | |
2140 MUTEX_HELD(&next->xp_req_lock))); | |
2141 | |
2142 /* Ooops! Current transport is closing. Unlink now */ | |
2143 if (next == SVC_EXPRTGONE) { | |
2144 svc_clone_unlink(clone_xprt); | |
2145 xprt = NULL; | |
2146 continue; | |
2147 } | |
2148 | |
2149 /* Ooops! Timeout while waiting for a request. Exit */ | |
2150 if (next == SVC_ETIMEDOUT) { | |
2151 svc_thread_exit(pool, clone_xprt); | |
2152 return (0); | |
2153 } | |
2154 | |
2155 /* | |
2156 * Interrupted by a signal while waiting for a | |
8139 | 2157 * request. Return to userspace and exit. | |
0 | 2158 */ |
2159 if (next == SVC_EINTR) { | |
2160 svc_thread_exit(pool, clone_xprt); | |
2161 return (EINTR); | |
2162 } | |
2163 | |
2164 /* | |
2165 * De-queue the request and release the request lock | |
2166 * on this transport (latched by svc_poll()). | |
2167 */ | |
2168 mp = next->xp_req_head; | |
2169 next->xp_req_head = mp->b_next; | |
2170 mp->b_next = (mblk_t *)0; | |
2171 | |
2172 TRACE_2(TR_FAC_KRPC, TR_NFSFP_QUE_REQ_DEQ, | |
2173 "rpc_que_req_deq:pool %p mp %p", pool, mp); | |
2174 mutex_exit(&next->xp_req_lock); | |
2175 | |
2176 /* | |
2177 * If this is a new request on a current transport then | |
2178 * the clone structure is already properly initialized. | |
2179 * Otherwise, if the request is on a different transport, | |
2180 * unlink from the current master and link to | |
2181 * the one we got a request on. | |
2182 */ | |
2183 if (next != xprt) { | |
2184 if (xprt) | |
2185 svc_clone_unlink(clone_xprt); | |
11967 | 2186 svc_clone_link(next, clone_xprt, NULL); | |
0 | 2187 xprt = next; |
2188 } | |
2189 | |
2190 /* | |
2191 * If there are more requests and req_cv hasn't | |
2192 * been signaled yet then wake up one more thread now. | |
2193 * | |
2194 * We avoid signaling req_cv until the most recently | |
2195 * signaled thread wakes up and gets CPU to clear | |
2196 * the `drowsy' flag. | |
2197 */ | |
2198 if (!(pool->p_drowsy || pool->p_reqs <= pool->p_walkers || | |
2199 pool->p_asleep == 0)) { | |
2200 mutex_enter(&pool->p_req_lock); | |
2201 | |
2202 if (pool->p_drowsy || pool->p_reqs <= pool->p_walkers || | |
2203 pool->p_asleep == 0) | |
2204 mutex_exit(&pool->p_req_lock); | |
2205 else { | |
2206 pool->p_asleep--; | |
2207 pool->p_drowsy = TRUE; | |
2208 | |
2209 cv_signal(&pool->p_req_cv); | |
2210 mutex_exit(&pool->p_req_lock); | |
2211 } | |
2212 } | |
2213 | |
2214 /* | |
2215 * If there are no asleep/signaled threads, we are | |
2216 * still below pool->p_maxthreads limit, and no thread is | |
2217 * currently being created then signal the creator | |
2218 * for one more service thread. | |
2219 * | |
2220 * The asleep and drowsy checks are not protected | |
2221 * by a lock since it hurts performance and a wrong | |
2222 * decision is not essential. | |
2223 */ | |
2224 if (pool->p_asleep == 0 && !pool->p_drowsy && | |
2225 pool->p_threads + pool->p_detached_threads < | |
2226 pool->p_maxthreads) | |
2227 svc_creator_signal(pool); | |
2228 | |
2229 /* | |
2230 * Process the request. | |
2231 */ | |
2232 svc_getreq(clone_xprt, mp); | |
2233 | |
2234 /* If thread had a reservation it should have been canceled */ | |
2235 ASSERT(!clone_xprt->xp_reserved); | |
2236 | |
2237 /* | |
2238 * If the clone is marked detached then exit. | |
2239 * The rpcmod slot has already been released | |
2240 * when we detached this thread. | |
2241 */ | |
2242 if (clone_xprt->xp_detached) { | |
2243 svc_thread_exitdetached(pool, clone_xprt); | |
2244 return (0); | |
2245 } | |
2246 | |
2247 /* | |
2248 * Release our reference on the rpcmod | |
2249 * slot attached to xp_wq->q_ptr. | |
2250 */ | |
2251 (*RELE_PROC(xprt)) (clone_xprt->xp_wq, NULL); | |
2252 } | |
2253 /* NOTREACHED */ | |
2254 } | |
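The drowsy-flag logic used in svc_run() (and again in svc_queuereq()) throttles wakeups: a thread is signaled only when there are sleepers, more requests than walkers, and no previously signaled thread is still on its way up. The sleeper clears p_drowsy once it gets the CPU. A simplified single-threaded model of those state transitions, with hypothetical struct and function names:

```c
#include <assert.h>

/* Sketch of the p_drowsy wakeup throttle. In the real code both
 * functions manipulate this state under p_req_lock. */
struct wake_pool {
	int drowsy;	/* a signaled thread has not woken up yet */
	int reqs;	/* pending requests in the pool */
	int walkers;	/* threads walking the transport list */
	int asleep;	/* threads sleeping in svc_poll() */
	int signals;	/* counts cv_signal(&p_req_cv) calls */
};

static void
maybe_wake_one(struct wake_pool *p)
{
	/* suppress the wakeup if one is pending, there is no backlog
	 * beyond the walkers, or nobody is asleep */
	if (p->drowsy || p->reqs <= p->walkers || p->asleep == 0)
		return;
	p->asleep--;
	p->drowsy = 1;
	p->signals++;		/* models cv_signal(&p_req_cv) */
}

static void
sleeper_wakes(struct wake_pool *p)
{
	p->drowsy = 0;		/* further wakeups may now be sent */
}
```

This is why the real code checks the condition once without the lock and once with it: the unlocked check cheaply filters out the common "nothing to do" case, and a stale read merely delays one wakeup rather than losing it.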
2255 | |
2256 /* | |
2257 * Flush any pending requests for the queue and | |
2258 * free the associated mblks. | |
2259 */ | |
2260 void | |
2261 svc_queueclean(queue_t *q) | |
2262 { | |
2263 SVCMASTERXPRT *xprt = ((void **) q->q_ptr)[0]; | |
2264 mblk_t *mp; | |
4741 | 2265 SVCPOOL *pool; | |
0 | 2266 |
2267 /* | |
2268 * clean up the requests | |
2269 */ | |
2270 mutex_enter(&xprt->xp_req_lock); | |
4741 | 2271 pool = xprt->xp_pool; | |
0 | 2272 while ((mp = xprt->xp_req_head) != NULL) { |
4741 | 2273 /* remove the request from the list and decrement p_reqs */ | |
0 | 2274 xprt->xp_req_head = mp->b_next; |
4741 | 2275 mutex_enter(&pool->p_req_lock); | |
0 | 2276 mp->b_next = (mblk_t *)0; |
4741 | 2277 pool->p_reqs--; | |
2278 mutex_exit(&pool->p_req_lock); | |
0 | 2279 (*RELE_PROC(xprt)) (xprt->xp_wq, mp); |
2280 } | |
2281 mutex_exit(&xprt->xp_req_lock); | |
2282 } | |
2283 | |
2284 /* | |
2285 * This routine is called by rpcmod to inform kernel RPC that a | |
2286 * queue is closing. It is called after all the requests have been | |
2287 * picked up (that is after all the slots on the queue have | |
2288 * been released by kernel RPC). It is also guaranteed that no more | |
2289 * request will be delivered on this transport. | |
2290 * | |
2291 * - clear xp_wq to mark the master server transport handle as closing | |
2292 * - if there are no more threads on this transport close/destroy it | |
2293 * - otherwise, broadcast threads sleeping in svc_poll(); the last | |
2294 * thread will close/destroy the transport. | |
2295 */ | |
2296 void | |
2297 svc_queueclose(queue_t *q) | |
2298 { | |
2299 SVCMASTERXPRT *xprt = ((void **) q->q_ptr)[0]; | |
2300 | |
2301 if (xprt == NULL) { | |
2302 /* | |
2303 * If there is no master xprt associated with this stream, | |
2304 * then there is nothing to do. This happens regularly | |
2305 * with connection-oriented listening streams created by | |
2306 * nfsd. | |
2307 */ | |
2308 return; | |
2309 } | |
2310 | |
2311 mutex_enter(&xprt->xp_thread_lock); | |
2312 | |
2313 ASSERT(xprt->xp_req_head == NULL); | |
2314 ASSERT(xprt->xp_wq != NULL); | |
2315 | |
2316 xprt->xp_wq = NULL; | |
2317 | |
2318 if (xprt->xp_threads == 0) { | |
2319 SVCPOOL *pool = xprt->xp_pool; | |
2320 | |
2321 /* | |
2322 * svc_xprt_cleanup() destroys the transport | |
2323 * or releases the transport thread lock | |
2324 */ | |
2325 svc_xprt_cleanup(xprt, FALSE); | |
2326 | |
2327 mutex_enter(&pool->p_thread_lock); | |
2328 | |
2329 /* | |
2330 * If the pool is in closing state and this was | |
2331 * the last transport in the pool then signal the creator | |
2332 * thread to clean up and exit. | |
2333 */ | |
2334 if (pool->p_closing && svc_pool_tryexit(pool)) { | |
2335 return; | |
2336 } | |
2337 mutex_exit(&pool->p_thread_lock); | |
2338 } else { | |
2339 /* | |
2340 * Wakeup threads sleeping in svc_poll() so that they | |
2341 * unlink from the transport | |
2342 */ | |
2343 mutex_enter(&xprt->xp_pool->p_req_lock); | |
2344 cv_broadcast(&xprt->xp_pool->p_req_cv); | |
2345 mutex_exit(&xprt->xp_pool->p_req_lock); | |
2346 | |
2347 /* | |
2348 * NOTICE: No references to the master transport structure | |
2349 * beyond this point! | |
2350 */ | |
2351 mutex_exit(&xprt->xp_thread_lock); | |
2352 } | |
2353 } | |
2354 | |
2355 /* | |
2356 * Interrupt `request delivery' routine called from rpcmod | |
2357 * - put a request at the tail of the transport request queue | |
2358 * - insert a hint for svc_poll() into the xprt-ready queue | |
2359 * - increment the `pending-requests' count for the pool | |
2360 * - wake up a thread sleeping in svc_poll() if necessary | |
2361 * - if all the threads are running ask the creator for a new one. | |
2362 */ | |
2363 void | |
2364 svc_queuereq(queue_t *q, mblk_t *mp) | |
2365 { | |
2366 SVCMASTERXPRT *xprt = ((void **) q->q_ptr)[0]; | |
2367 SVCPOOL *pool = xprt->xp_pool; | |
2368 | |
2369 TRACE_0(TR_FAC_KRPC, TR_SVC_QUEUEREQ_START, "svc_queuereq_start"); | |
2370 | |
8778 | 2371 ASSERT(!is_system_labeled() || msg_getcred(mp, NULL) != NULL ||
1676 | 2372 mp->b_datap->db_type != M_DATA); |
2373 | |
0 | 2374 /* |
2375 * Step 1. | |
4741 | 2376 * Grab the transport's request lock and the | |
2377 * pool's request lock so that when we put | |
0 | 2378 * the request at the tail of the transport's |
4741 | 2379 * request queue, possibly put the request on | |
2380 * the xprt-ready queue, and increment the | |
2381 * pending request count, it looks atomic. | |
0 | 2382 */ |
2383 mutex_enter(&xprt->xp_req_lock); | |
4741 | 2384 mutex_enter(&pool->p_req_lock); | |
0 | 2385 if (xprt->xp_req_head == NULL) |
2386 xprt->xp_req_head = mp; | |
2387 else | |
2388 xprt->xp_req_tail->b_next = mp; | |
2389 xprt->xp_req_tail = mp; | |
2390 | |
2391 /* | |
2392 * Step 2. | |
4741 | 2393 * Insert a hint into the xprt-ready queue, increment | |
2394 * `pending-requests' count for the pool, and wake up | |
2395 * a thread sleeping in svc_poll() if necessary. | |
0 | 2396 */ |
2397 | |
2398 /* Insert pointer to this transport into the xprt-ready queue */ | |
2399 svc_xprt_qput(pool, xprt); | |
2400 | |
2401 /* Increment the `pending-requests' count for the pool */ | |
2402 pool->p_reqs++; | |
2403 | |
2404 TRACE_2(TR_FAC_KRPC, TR_NFSFP_QUE_REQ_ENQ, | |
2405 "rpc_que_req_enq:pool %p mp %p", pool, mp); | |
2406 | |
2407 /* | |
2408 * If there are more requests and req_cv hasn't | |
2409 * been signaled yet then wake up one more thread now. | |
2410 * | |
2411 * We avoid signaling req_cv until the most recently | |
2412 * signaled thread wakes up and gets CPU to clear | |
2413 * the `drowsy' flag. | |
2414 */ | |
2415 if (pool->p_drowsy || pool->p_reqs <= pool->p_walkers || | |
2416 pool->p_asleep == 0) { | |
2417 mutex_exit(&pool->p_req_lock); | |
2418 } else { | |
2419 pool->p_drowsy = TRUE; | |
2420 pool->p_asleep--; | |
2421 | |
2422 /* | |
2423 * Signal wakeup and drop the request lock. | |
2424 */ | |
2425 cv_signal(&pool->p_req_cv); | |
2426 mutex_exit(&pool->p_req_lock); | |
2427 } | |
4741 | 2428 mutex_exit(&xprt->xp_req_lock); | |
0 | 2429 |
2430 /* | |
2431 * Step 3. | |
2432 * If there are no asleep/signaled threads, we are | |
2433 * still below pool->p_maxthreads limit, and no thread is | |
2434 * currently being created then signal the creator | |
2435 * for one more service thread. | |
2436 * | |
2437 * The asleep and drowsy checks are not protected | |
2438 * by a lock since it hurts performance and a wrong | |
2439 * decision is not essential. | |
2440 */ | |
2441 if (pool->p_asleep == 0 && !pool->p_drowsy && | |
4741 | 2442 pool->p_threads + pool->p_detached_threads < pool->p_maxthreads) | |
0 | 2443 svc_creator_signal(pool); |
2444 | |
2445 TRACE_1(TR_FAC_KRPC, TR_SVC_QUEUEREQ_END, | |
2446 "svc_queuereq_end:(%S)", "end"); | |
2447 } | |

/*
 * Reserve a service thread so that it can be detached later.
 * This reservation is required to make sure that when it tries to
 * detach itself the total number of detached threads does not exceed
 * pool->p_maxthreads - pool->p_redline (i.e. that we can have
 * up to pool->p_redline non-detached threads).
 *
 * If the thread does not detach itself later, it should cancel the
 * reservation before returning to svc_run().
 *
 * - check if there is room for more reserved/detached threads
 * - if so, then increment the `reserved threads' count for the pool
 * - mark the thread as reserved (setting the flag in the clone transport
 *   handle for this thread)
 * - returns 1 if the reservation succeeded, 0 if it failed.
 */
int
svc_reserve_thread(SVCXPRT *clone_xprt)
{
	SVCPOOL *pool = clone_xprt->xp_master->xp_pool;

	/* Recursive reservations are not allowed */
	ASSERT(!clone_xprt->xp_reserved);
	ASSERT(!clone_xprt->xp_detached);

	/* Check pool counts if there is room for reservation */
	mutex_enter(&pool->p_thread_lock);
	if (pool->p_reserved_threads + pool->p_detached_threads >=
	    pool->p_maxthreads - pool->p_redline) {
		mutex_exit(&pool->p_thread_lock);
		return (0);
	}
	pool->p_reserved_threads++;
	mutex_exit(&pool->p_thread_lock);

	/* Mark the thread (clone handle) as reserved */
	clone_xprt->xp_reserved = TRUE;

	return (1);
}

/*
 * Cancel a reservation for a thread.
 * - decrement the `reserved threads' count for the pool
 * - clear the flag in the clone transport handle for this thread.
 */
void
svc_unreserve_thread(SVCXPRT *clone_xprt)
{
	SVCPOOL *pool = clone_xprt->xp_master->xp_pool;

	/* Thread must have a reservation */
	ASSERT(clone_xprt->xp_reserved);
	ASSERT(!clone_xprt->xp_detached);

	/* Decrement global count */
	mutex_enter(&pool->p_thread_lock);
	pool->p_reserved_threads--;
	mutex_exit(&pool->p_thread_lock);

	/* Clear reservation flag */
	clone_xprt->xp_reserved = FALSE;
}

/*
 * Detach a thread from its transport, so that it can block for an
 * extended time. Because the transport can be closed after the thread is
 * detached, the thread should have already sent off a reply if it was
 * going to send one.
 *
 * - decrement the `non-detached threads' count and increment the
 *   `detached threads' count for the transport
 * - decrement the `non-detached threads' and `reserved threads'
 *   counts and increment the `detached threads' count for the pool
 * - release the rpcmod slot
 * - mark the clone (thread) as detached.
 *
 * No need to return a pointer to the thread's CPR information, since
 * the thread has a userland identity.
 *
 * NOTICE: a thread must not detach itself without making a prior
 * reservation through svc_reserve_thread().
 */
callb_cpr_t *
svc_detach_thread(SVCXPRT *clone_xprt)
{
	SVCMASTERXPRT *xprt = clone_xprt->xp_master;
	SVCPOOL *pool = xprt->xp_pool;

	/* Thread must have a reservation */
	ASSERT(clone_xprt->xp_reserved);
	ASSERT(!clone_xprt->xp_detached);

	/* Bookkeeping for this transport */
	mutex_enter(&xprt->xp_thread_lock);
	xprt->xp_threads--;
	xprt->xp_detached_threads++;
	mutex_exit(&xprt->xp_thread_lock);

	/* Bookkeeping for the pool */
	mutex_enter(&pool->p_thread_lock);
	pool->p_threads--;
	pool->p_reserved_threads--;
	pool->p_detached_threads++;
	mutex_exit(&pool->p_thread_lock);

	/* Release an rpcmod slot for this request */
	(*RELE_PROC(xprt)) (clone_xprt->xp_wq, NULL);

	/* Mark the clone (thread) as detached */
	clone_xprt->xp_reserved = FALSE;
	clone_xprt->xp_detached = TRUE;

	return (NULL);
}
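
/*
 * Illustrative usage sketch (not part of this file): a service thread
 * that may block for an extended time first reserves itself via
 * svc_reserve_thread(), then either detaches or cancels the
 * reservation.  The names `clone_xprt' and `may_block_long' below are
 * hypothetical:
 *
 *	if (svc_reserve_thread(clone_xprt)) {
 *		if (may_block_long)
 *			(void) svc_detach_thread(clone_xprt);
 *		else
 *			svc_unreserve_thread(clone_xprt);
 *	}
 *
 * A thread that neither detaches nor unreserves would leak a slot in
 * pool->p_reserved_threads, eventually starving future reservations.
 */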

/*
 * This routine is responsible for extracting the RDMA plugin master
 * XPRTs, unregistering them from the SVCPOOL, and initiating plugin
 * specific cleanup. It is passed a list/group of rdma transports as
 * records which are active in a given registered or unregistered kRPC
 * thread pool. It shuts down all active rdma transports in that pool.
 * If the thread active on the transport happens to be the last thread
 * for that pool, it will signal the creator thread to clean up the
 * pool and destroy the xprt in svc_queueclose().
 */
void
rdma_stop(rdma_xprt_group_t *rdma_xprts)
{
	SVCMASTERXPRT *xprt;
	rdma_xprt_record_t *curr_rec;
	queue_t *q;
	mblk_t *mp;
	int i, rtg_count;
	SVCPOOL *pool;

	if (rdma_xprts->rtg_count == 0)
		return;

	rtg_count = rdma_xprts->rtg_count;

	for (i = 0; i < rtg_count; i++) {
		curr_rec = rdma_xprts->rtg_listhead;
		rdma_xprts->rtg_listhead = curr_rec->rtr_next;
		rdma_xprts->rtg_count--;
		curr_rec->rtr_next = NULL;
		xprt = curr_rec->rtr_xprt_ptr;
		q = xprt->xp_wq;
		svc_rdma_kstop(xprt);

		mutex_enter(&xprt->xp_req_lock);
		pool = xprt->xp_pool;
		while ((mp = xprt->xp_req_head) != NULL) {
			/*
			 * remove the request from the list and
			 * decrement p_reqs
			 */
			xprt->xp_req_head = mp->b_next;
			mutex_enter(&pool->p_req_lock);
			mp->b_next = (mblk_t *)0;
			pool->p_reqs--;
			mutex_exit(&pool->p_req_lock);
			if (mp) {
				rdma_recv_data_t *rdp = (rdma_recv_data_t *)
				    mp->b_rptr;
				RDMA_BUF_FREE(rdp->conn, &rdp->rpcmsg);
				RDMA_REL_CONN(rdp->conn);
				freemsg(mp);
			}
		}
		mutex_exit(&xprt->xp_req_lock);
		svc_queueclose(q);
#ifdef DEBUG
		if (rdma_check)
			cmn_err(CE_NOTE, "rdma_stop: Exited svc_queueclose\n");
#endif
		/*
		 * Free the rdma transport record for the expunged rdma
		 * based master transport handle.
		 */
		kmem_free(curr_rec, sizeof (rdma_xprt_record_t));
		if (!rdma_xprts->rtg_listhead)
			break;
	}
}


/*
 * rpc_msg_dup/rpc_msg_free
 * Currently only used by svc_rpcsec_gss.c but put in this file as it
 * may be useful to others in the future.
 * But future consumers should be careful because so far it is
 * - only tested/used for call msgs (not reply)
 * - only tested/used with call verf oa_length == 0
 */
struct rpc_msg *
rpc_msg_dup(struct rpc_msg *src)
{
	struct rpc_msg *dst;
	struct opaque_auth oa_src, oa_dst;

	dst = kmem_alloc(sizeof (*dst), KM_SLEEP);

	dst->rm_xid = src->rm_xid;
	dst->rm_direction = src->rm_direction;

	dst->rm_call.cb_rpcvers = src->rm_call.cb_rpcvers;
	dst->rm_call.cb_prog = src->rm_call.cb_prog;
	dst->rm_call.cb_vers = src->rm_call.cb_vers;
	dst->rm_call.cb_proc = src->rm_call.cb_proc;

	/* dup opaque auth call body cred */
	oa_src = src->rm_call.cb_cred;

	oa_dst.oa_flavor = oa_src.oa_flavor;
	oa_dst.oa_base = kmem_alloc(oa_src.oa_length, KM_SLEEP);

	bcopy(oa_src.oa_base, oa_dst.oa_base, oa_src.oa_length);
	oa_dst.oa_length = oa_src.oa_length;

	dst->rm_call.cb_cred = oa_dst;

	/* dup or just alloc opaque auth call body verifier */
	if (src->rm_call.cb_verf.oa_length > 0) {
		oa_src = src->rm_call.cb_verf;

		oa_dst.oa_flavor = oa_src.oa_flavor;
		oa_dst.oa_base = kmem_alloc(oa_src.oa_length, KM_SLEEP);

		bcopy(oa_src.oa_base, oa_dst.oa_base, oa_src.oa_length);
		oa_dst.oa_length = oa_src.oa_length;

		dst->rm_call.cb_verf = oa_dst;
	} else {
		oa_dst.oa_flavor = -1;	/* will be set later */
		oa_dst.oa_base = kmem_alloc(MAX_AUTH_BYTES, KM_SLEEP);

		oa_dst.oa_length = 0;	/* will be set later */

		dst->rm_call.cb_verf = oa_dst;
	}
	return (dst);

error:
	kmem_free(dst->rm_call.cb_cred.oa_base, dst->rm_call.cb_cred.oa_length);
	kmem_free(dst, sizeof (*dst));
	return (NULL);
}

void
rpc_msg_free(struct rpc_msg **msg, int cb_verf_oa_length)
{
	struct rpc_msg *m = *msg;

	kmem_free(m->rm_call.cb_cred.oa_base, m->rm_call.cb_cred.oa_length);
	m->rm_call.cb_cred.oa_base = NULL;
	m->rm_call.cb_cred.oa_length = 0;

	kmem_free(m->rm_call.cb_verf.oa_base, cb_verf_oa_length);
	m->rm_call.cb_verf.oa_base = NULL;
	m->rm_call.cb_verf.oa_length = 0;

	kmem_free(m, sizeof (*m));
	*msg = NULL;
}
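
/*
 * Illustrative pairing sketch (not part of this file): rpc_msg_dup()
 * and rpc_msg_free() are intended to be used together.  Because a dup
 * of a call message with a zero-length verifier allocates
 * MAX_AUTH_BYTES for cb_verf.oa_base, the caller must pass the
 * allocated verifier size back to rpc_msg_free(); the variable names
 * below are hypothetical:
 *
 *	struct rpc_msg *copy = rpc_msg_dup(msg);
 *
 *	... use copy, possibly filling in cb_verf later ...
 *
 *	rpc_msg_free(&copy, MAX_AUTH_BYTES);
 *
 * Passing the wrong length to rpc_msg_free() would corrupt the kernel
 * memory allocator's state, since kmem_free() requires the original
 * allocation size.
 */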