Thanks for having a look. I found a comment on Stack Exchange suggesting that -Xlinker --whole-archive would prevent the linker from dropping
apparently unused symbols, and this converted the missing-symbol message into a core dump on Python load. Not a big improvement, and nm -a still shows the same symbol (and a bunch of others, actually) as undefined in the target, when they ARE defined in the libraries I am linking against. Moreover, the man page for g++ says nothing about a "--whole-archive" option for -Xlinker (it appears to be an ld option, so presumably it is documented in ld's man page rather than g++'s). Still mysterious.

Hello, Daniweb --
I have a C++ library for which I am trying to make a Python interface, and I have come across a linking issue which I
haven't been able to solve. The library has a series of header and source files, and I am looking for a way to
add the Python interface without modifying any of them: that is, to compile and link in the new Boost.Python sources
separately, rather than adding the additional Boost code directly to the existing .cpp files.

I have worked out a simple test case for what I am trying to do. There is a simple class called "World", a source file that
implements some of the members, and a source file for the wrapper code. In addition, there is a makefile with two targets, one that
compiles the files in a single translation unit, which works fine, and one that compiles the wrapper and the source as separate
translation units and then links them, which does not work: there are no compile or link errors, but on import, Python
reports an undefined symbol.
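The makefile's two targets look essentially like this (file names and the Boost.Python library name are simplified from my actual setup):

```make
# Simplified sketch of the two targets; adjust include paths and the
# Boost.Python library name for your installation.
PY_CFLAGS := $(shell python-config --includes)

# Target 1: wrapper + implementation compiled as one translation unit -- works
single:
	g++ -fPIC -shared $(PY_CFLAGS) World_single.cpp -lboost_python -o World.so

# Target 2: separate translation units, then link -- builds cleanly,
# but "import World" fails with an undefined symbol
separate:
	g++ -fPIC $(PY_CFLAGS) -c World.cpp -o World.o
	g++ -fPIC $(PY_CFLAGS) -c World_wrap.cpp -o World_wrap.o
	g++ -shared World_wrap.o World.o -lboost_python -o World.so
```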

Here is the header --

#ifndef IG________________WORLD_H______________________

#define IG________________WORLD_H______________________
#include <string>

struct World
{
  World(std::string msg);     // : msg(msg) {} // added constructor
  void set(std::string msg);  // { this->msg = msg; }
  std::string greet();        // { return msg; }
  std::string msg;
};

#endif // IG________________WORLD_H______________________

Here is the source file --

#include <boost/python.hpp>
#include <boost/python/module.hpp>
#include <boost/python/def.hpp>
#include "World.h"
using namespace boost::python;

World::World(std::string msg) 
  : msg(msg) 
{} // added constructor

std::string World::greet() 
{
  return msg; // matches the commented-out inline body in World.h
}

Hi, I am writing a library that does some heavy computation, so I have been trying to speed things up with some low-level parallelism. My machine runs Ubuntu 16.04 on an old i7 with four virtual cores.
I tried OpenMP, and while it runs error-free and appears to be creating the threads, I never see any speedup. I cut my code down drastically to this test code:

#include "timer.h"
#include <iostream>
#include <math.h>
void testPlusEQ(double* summand1, double* summand2, unsigned int size_)
{
  for (unsigned int i = 0; i < size_; i++) { summand1[i] += summand2[i]; }
}

void testPlusEQ_OMP(double* summand1, double* summand2, unsigned int size_)
{
#pragma omp simd
  for (unsigned int i = 0; i < size_; i++) { summand1[i] += summand2[i]; }
}

int main()
{
  unsigned int size(10000000);
  double* x(new double[size]), *y(new double[size]);
  for (unsigned int i = 0; i < size; ++i) { x[i] = 1 + ((double)i)/((double)size); y[i] = 2 + ((double)i)/((double)size); }
  uint64_t t1(MathLib::GetTimeStamp());
  testPlusEQ(x, y, size);       // time the plain loop
  uint64_t t2(MathLib::GetTimeStamp());
  testPlusEQ_OMP(x, y, size);   // time the OpenMP loop
  uint64_t t3(MathLib::GetTimeStamp());
  std::cout << " ST " << t2 - t1 << " OMP " << t3 - t2
            << " OMP - ST " << ((double)(t3 - t2)) - ((double)(t2 - t1))
            << " OMP/ST " << ((double)(t3 - t2))/((double)(t2 - t1)) << "\n";
  delete[] x;
  delete[] y;
  return 0;
}
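One thing worth noting about the test above: "#pragma omp simd" only requests vectorization within a single thread; it never creates threads, so no thread-level speedup should be expected from it. Thread-level parallelism takes "#pragma omp parallel for", compiled with -fopenmp. A minimal sketch of a threaded variant (the function name here is mine, not from my library):

```cpp
#include <cstddef>

// Hypothetical threaded variant: "#pragma omp parallel for" splits the loop
// iterations across threads. Compile with -fopenmp; without it the pragma is
// simply ignored and the loop runs serially, with the same result.
void testPlusEQ_parallel(double* summand1, double* summand2, std::size_t size_)
{
#pragma omp parallel for
  for (std::size_t i = 0; i < size_; i++) { summand1[i] += summand2[i]; }
}
```

Even threaded, a pure streaming loop like this is often memory-bandwidth-bound rather than compute-bound, so the observed speedup on four cores can be well under 4x.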

The timer is this code, btw:

// returns time in microseconds
uint64_t MathLib::GetTimeStamp() {
  struct timeval tv;
  gettimeofday(&tv, NULL); // fills tv; without this call it is uninitialized
  return tv.tv_sec*(uint64_t)1000000 + tv.tv_usec;
}
int timeval_subtract(timeval* result, timeval* x, timeval* y)
{
  /* Perform the carry for the later ...
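As an aside, the gettimeofday bookkeeping (and the uninitialized-struct trap) can be sidestepped with std::chrono; a minimal sketch of an equivalent microsecond timestamp:

```cpp
#include <chrono>
#include <cstdint>

// Microseconds from a monotonic clock; differences between two calls are
// safe for timing even if the wall clock is adjusted mid-run.
uint64_t GetTimeStampChrono()
{
  using namespace std::chrono;
  return static_cast<uint64_t>(
      duration_cast<microseconds>(
          steady_clock::now().time_since_epoch()).count());
}
```

Subtracting two of these timestamps gives the elapsed microseconds directly, with no timeval_subtract-style carry handling.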