/* -*- Mode: C++; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 4 -*- */
/*
* This file is part of the LibreOffice project.
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/.
*
* This file incorporates work covered by the following license notice:
*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed
* with this work for additional information regarding copyright
* ownership. The ASF licenses this file to you under the Apache
* License, Version 2.0 (the "License"); you may not use this file
* except in compliance with the License. You may obtain a copy of
* the License at http://www.apache.org/licenses/LICENSE-2.0 .
*/
#include <sal/config.h>
#include <cassert>
#include <chrono>
#include <algorithm>
#include <utility>
#include <unordered_map>
#include <osl/diagnose.h>
#include <sal/log.hxx>
#include <uno/threadpool.h>
#include "threadpool.hxx"
#include "thread.hxx"
using namespace ::osl;
using namespace ::rtl;
namespace cppu_threadpool
{
WaitingThread::WaitingThread(
rtl::Reference<ORequestThread> theThread): thread(std::move(theThread))
{}
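// DisposedCallerAdmin keeps track of the dispose ids of callers that have
// already been disposed, so that jobs arriving late for such a caller can be
// recognized and discarded (see ThreadPool::addJob).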
DisposedCallerAdminHolder const & DisposedCallerAdmin::getInstance()
{
static DisposedCallerAdminHolder theDisposedCallerAdmin = std::make_shared<DisposedCallerAdmin>();
return theDisposedCallerAdmin;
}
DisposedCallerAdmin::~DisposedCallerAdmin()
{
SAL_WARN_IF( !m_vector.empty(), "cppu.threadpool", "DisposedCallerList : " << m_vector.size() << " left");
}
void DisposedCallerAdmin::dispose( void const * nDisposeId )
{
std::scoped_lock guard( m_mutex );
m_vector.push_back( nDisposeId );
}
void DisposedCallerAdmin::destroy( void const * nDisposeId )
{
std::scoped_lock guard( m_mutex );
m_vector.erase(std::remove(m_vector.begin(), m_vector.end(), nDisposeId), m_vector.end());
}
bool DisposedCallerAdmin::isDisposed( void const * nDisposeId )
{
std::scoped_lock guard( m_mutex );
return (std::find(m_vector.begin(), m_vector.end(), nDisposeId) != m_vector.end());
}
ThreadPool::ThreadPool() :
m_DisposedCallerAdmin( DisposedCallerAdmin::getInstance() )
{
}
ThreadPool::~ThreadPool()
{
SAL_WARN_IF( m_mapQueue.size(), "cppu.threadpool", "ThreadIdHashMap: " << m_mapQueue.size() << " left");
}
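// Records nDisposeId as disposed and forwards the dispose request to the
// synchronous and asynchronous job queues of all registered thread ids.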
void ThreadPool::dispose( void const * nDisposeId )
{
m_DisposedCallerAdmin->dispose( nDisposeId );
std::scoped_lock guard( m_mutex );
for (auto const& item : m_mapQueue)
{
if( item.second.first )
{
item.second.first->dispose( nDisposeId );
}
if( item.second.second )
{
item.second.second->dispose( nDisposeId );
}
}
}
void ThreadPool::destroy( void const * nDisposeId )
{
m_DisposedCallerAdmin->destroy( nDisposeId );
}
/******************
 * This method lets a worker thread wait a certain amount of time. If a new
 * request comes in within this timespan, the thread is reused (createThread
 * picks it back up from m_dequeThreads). This is done only to improve
 * performance; it is not required for threadpool functionality.
 ******************/
void ThreadPool::waitInPool( rtl::Reference< ORequestThread > const & pThread )
{
WaitingThread waitingThread(pThread);
{
std::scoped_lock guard( m_mutexWaitingThreadList );
m_dequeThreads.push_front( &waitingThread );
}
// let the thread wait 2 seconds
waitingThread.condition.wait( std::chrono::seconds(2) );
{
std::scoped_lock guard ( m_mutexWaitingThreadList );
if( waitingThread.thread.is() )
{
// thread wasn't reused, remove it from the list
WaitingThreadDeque::iterator ii = find(
m_dequeThreads.begin(), m_dequeThreads.end(), &waitingThread );
OSL_ASSERT( ii != m_dequeThreads.end() );
m_dequeThreads.erase( ii );
}
}
}
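// Wakes up all threads still parked in waitInPool and joins every worker
// thread spawned by this pool; only called from the last
// uno_threadpool_destroy (see the comment on uno_ThreadPool life spans below).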
void ThreadPool::joinWorkers()
{
{
std::scoped_lock guard( m_mutexWaitingThreadList );
for (auto const& thread : m_dequeThreads)
{
// wake the threads up
thread->condition.set();
}
}
m_aThreadAdmin.join();
}
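// Hands the queue over to a waiting thread from m_dequeThreads if one is
// available; otherwise launches a new ORequestThread. Returns false only if
// launching a new thread failed.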
bool ThreadPool::createThread( JobQueue *pQueue ,
const ByteSequence &aThreadId,
bool bAsynchron )
{
{
// Can a thread be reused?
std::scoped_lock guard( m_mutexWaitingThreadList );
if( ! m_dequeThreads.empty() )
{
// inform the thread and let it go
struct WaitingThread *pWaitingThread = m_dequeThreads.back();
pWaitingThread->thread->setTask( pQueue , aThreadId , bAsynchron );
pWaitingThread->thread = nullptr;
// remove from list
m_dequeThreads.pop_back();
// let the thread go
pWaitingThread->condition.set();
return true;
}
}
rtl::Reference pThread(
new ORequestThread( this, pQueue , aThreadId, bAsynchron) );
return pThread->launch();
}
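// Removes the synchronous (bAsynchron == false) or asynchronous queue of
// aThreadId, provided it is still empty; returns false if another thread has
// enqueued a new job in the meantime.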
bool ThreadPool::revokeQueue( const ByteSequence &aThreadId, bool bAsynchron )
{
std::scoped_lock guard( m_mutex );
ThreadIdHashMap::iterator ii = m_mapQueue.find( aThreadId );
OSL_ASSERT( ii != m_mapQueue.end() );
if( bAsynchron )
{
if( ! (*ii).second.second->isEmpty() )
{
// another thread has put something into the queue
return false;
}
(*ii).second.second = nullptr;
if( (*ii).second.first )
{
// all oneway requests have been processed, now
// synchronous requests may go on
(*ii).second.first->resume();
}
}
else
{
if( ! (*ii).second.first->isEmpty() )
{
// another thread has put something into the queue
return false;
}
(*ii).second.first = nullptr;
}
if( nullptr == (*ii).second.first && nullptr == (*ii).second.second )
{
m_mapQueue.erase( ii );
}
return true;
}
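// Adds a job to the matching queue of aThreadId, creating the queue (and
// requesting a worker thread) on demand. If disposeId has already been
// disposed, the job is silently discarded and true is returned; false is
// returned only if a required worker thread could not be created.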
bool ThreadPool::addJob(
const ByteSequence &aThreadId ,
bool bAsynchron,
void *pThreadSpecificData,
RequestFun * doRequest,
void const * disposeId )
{
bool bCreateThread = false;
JobQueue *pQueue = nullptr;
{
std::scoped_lock guard( m_mutex );
if (m_DisposedCallerAdmin->isDisposed(disposeId)) {
return true;
}
ThreadIdHashMap::iterator ii = m_mapQueue.find( aThreadId );
if( ii == m_mapQueue.end() )
{
m_mapQueue[ aThreadId ] = std::pair < JobQueue * , JobQueue * > ( nullptr , nullptr );
ii = m_mapQueue.find( aThreadId );
OSL_ASSERT( ii != m_mapQueue.end() );
}
if( bAsynchron )
{
if( ! (*ii).second.second )
{
(*ii).second.second = new JobQueue();
bCreateThread = true;
}
pQueue = (*ii).second.second;
}
else
{
if( ! (*ii).second.first )
{
(*ii).second.first = new JobQueue();
bCreateThread = true;
}
pQueue = (*ii).second.first;
if( (*ii).second.second && ( (*ii).second.second->isBusy() ) )
{
pQueue->suspend();
}
}
pQueue->add( pThreadSpecificData , doRequest );
}
return !bCreateThread || createThread( pQueue , aThreadId , bAsynchron);
}
void ThreadPool::prepare( const ByteSequence &aThreadId )
{
std::scoped_lock guard( m_mutex );
ThreadIdHashMap::iterator ii = m_mapQueue.find( aThreadId );
if( ii == m_mapQueue.end() )
{
JobQueue *p = new JobQueue();
m_mapQueue[ aThreadId ] = std::pair< JobQueue * , JobQueue * > ( p , nullptr );
}
else if( nullptr == (*ii).second.first )
{
(*ii).second.first = new JobQueue();
}
}
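// Blocks on the synchronous queue of aThreadId until a job arrives (and
// returns its thread-specific data) or the queue is disposed via nDisposeId
// (and returns null); once the queue's call stack is empty, the queue is
// revoked and deleted.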
void * ThreadPool::enter( const ByteSequence & aThreadId , void const * nDisposeId )
{
JobQueue *pQueue = nullptr;
{
std::scoped_lock guard( m_mutex );
ThreadIdHashMap::iterator ii = m_mapQueue.find( aThreadId );
OSL_ASSERT( ii != m_mapQueue.end() );
pQueue = (*ii).second.first;
}
OSL_ASSERT( pQueue );
void *pReturn = pQueue->enter( nDisposeId );
if( pQueue->isCallstackEmpty() )
{
if( revokeQueue( aThreadId , false) )
{
// remove queue
delete pQueue;
}
}
return pReturn;
}
}
// All uno_ThreadPool handles in g_pThreadpoolHashSet with overlapping life
// spans share one ThreadPool instance. When g_pThreadpoolHashSet becomes empty
// (within the last uno_threadpool_destroy) all worker threads spawned by that
// ThreadPool instance are joined (which implies that uno_threadpool_destroy
// must never be called from a worker thread); afterwards, the next call to
// uno_threadpool_create (if any) will lead to a new ThreadPool instance.
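//
// A rough sketch (not taken from any actual client code) of how a bridge
// typically drives this C API, with error handling omitted:
//
//   uno_ThreadPool hPool = uno_threadpool_create();
//   uno_threadpool_attach( hPool );        // register the current thread
//   void * pJob = nullptr;
//   uno_threadpool_enter( hPool, &pJob );  // blocks; pJob is null if disposed
//   uno_threadpool_detach( hPool );
//   uno_threadpool_dispose( hPool );       // unblocks all waiters on hPool
//   uno_threadpool_destroy( hPool );       // joins workers if last handle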
using namespace cppu_threadpool;
namespace {
struct uno_ThreadPool_Equal
{
bool operator () ( const uno_ThreadPool &a , const uno_ThreadPool &b ) const
{
return a == b;
}
};
struct uno_ThreadPool_Hash
{
std::size_t operator () ( const uno_ThreadPool &a ) const
{
return reinterpret_cast<std::size_t>( a );
}
};
}
typedef std::unordered_map< uno_ThreadPool, ThreadPoolHolder, uno_ThreadPool_Hash, uno_ThreadPool_Equal > ThreadpoolHashSet;
static ThreadpoolHashSet *g_pThreadpoolHashSet;
struct _uno_ThreadPool
{
sal_Int32 dummy;
};
namespace {
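// Looks up the shared ThreadPool instance for a still-registered handle; must
// only be called while the handle is present in g_pThreadpoolHashSet.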
ThreadPoolHolder getThreadPool( uno_ThreadPool hPool )
{
MutexGuard guard( Mutex::getGlobalMutex() );
assert( g_pThreadpoolHashSet != nullptr );
ThreadpoolHashSet::iterator i( g_pThreadpoolHashSet->find(hPool) );
assert( i != g_pThreadpoolHashSet->end() );
return i->second;
}
}
extern "C" uno_ThreadPool SAL_CALL
uno_threadpool_create() SAL_THROW_EXTERN_C()
{
MutexGuard guard( Mutex::getGlobalMutex() );
ThreadPoolHolder p;
if( ! g_pThreadpoolHashSet )
{
g_pThreadpoolHashSet = new ThreadpoolHashSet;
p = new ThreadPool;
}
else
{
assert( !g_pThreadpoolHashSet->empty() );
p = g_pThreadpoolHashSet->begin()->second;
}
// Just ensure that the handle is unique in the process (via heap)
uno_ThreadPool h = new struct _uno_ThreadPool;
g_pThreadpoolHashSet->emplace( h, p );
return h;
}
extern "C" void SAL_CALL
uno_threadpool_attach( uno_ThreadPool hPool ) SAL_THROW_EXTERN_C()
{
sal_Sequence *pThreadId = nullptr;
uno_getIdOfCurrentThread( &pThreadId );
getThreadPool( hPool )->prepare( pThreadId );
rtl_byte_sequence_release( pThreadId );
uno_releaseIdFromCurrentThread();
}
extern "C" void SAL_CALL
uno_threadpool_enter( uno_ThreadPool hPool , void **ppJob )
SAL_THROW_EXTERN_C()
{
sal_Sequence *pThreadId = nullptr;
uno_getIdOfCurrentThread( &pThreadId );
*ppJob =
getThreadPool( hPool )->enter(
pThreadId,
hPool );
rtl_byte_sequence_release( pThreadId );
uno_releaseIdFromCurrentThread();
}
extern "C" void SAL_CALL
uno_threadpool_detach(SAL_UNUSED_PARAMETER uno_ThreadPool) SAL_THROW_EXTERN_C()
{
// we might do some tidying up here in case a thread called attach but never called detach
}
extern "C" void SAL_CALL
uno_threadpool_putJob(
uno_ThreadPool hPool,
sal_Sequence *pThreadId,
void *pJob,
void ( SAL_CALL * doRequest ) ( void *pThreadSpecificData ),
sal_Bool bIsOneway ) SAL_THROW_EXTERN_C()
{
if (!getThreadPool(hPool)->addJob( pThreadId, bIsOneway, pJob ,doRequest, hPool ))
{
SAL_WARN(
"cppu.threadpool",
"uno_threadpool_putJob in parallel with uno_threadpool_destroy");
}
}
extern "C" void SAL_CALL
uno_threadpool_dispose( uno_ThreadPool hPool ) SAL_THROW_EXTERN_C()
{
getThreadPool(hPool)->dispose(
hPool );
}
extern "C" void SAL_CALL
uno_threadpool_destroy( uno_ThreadPool hPool ) SAL_THROW_EXTERN_C()
{
ThreadPoolHolder p( getThreadPool(hPool) );
p->destroy(
hPool );
bool empty;
{
OSL_ASSERT( g_pThreadpoolHashSet );
MutexGuard guard( Mutex::getGlobalMutex() );
ThreadpoolHashSet::iterator ii = g_pThreadpoolHashSet->find( hPool );
OSL_ASSERT( ii != g_pThreadpoolHashSet->end() );
g_pThreadpoolHashSet->erase( ii );
delete hPool;
empty = g_pThreadpoolHashSet->empty();
if( empty )
{
delete g_pThreadpoolHashSet;
g_pThreadpoolHashSet = nullptr;
}
}
if( empty )
{
p->joinWorkers();
}
}
/* vim:set shiftwidth=4 softtabstop=4 expandtab: */