Discussion:
[Swig-devel] Radical new approach to development and moving towards version 3.1 or version 4.0
William S Fulton
2017-02-10 18:49:58 UTC
Permalink
I would like developers to focus on releasing swig-3.1 in the near future
and propose some radical changes for this release and future development.

Usually we try not to break backwards compatibility in our point releases,
3.0.x. Version 3.1 is an opportunity to clear out some old cruft and drop
support for older target languages. Some riskier changes I've resisted
pulling into 3.0 can also be merged into 3.1, most notably the doxygen work.

There is a wiki page at
https://github.com/swig/swig/wiki/SWIG-3.1-Development containing the
aspirations of what should go into 3.1. I suggest we keep this up to date
as progress is made.

I have also sensed some frustration that some of the half finished work
never makes it into the mainline. I've resisted pulling in some of these
branches as the quality is sub-standard and not something that we can
support given how few developers there are. The reality is some of the
target languages are completely sub-standard too, yet they already exist in
the released versions and are not removed. I wonder if we should take a new
approach going forwards: keep a single code base but classify target
languages into one of two quality categories.

1) First-class
These would be the target languages that are recognised as good quality and
fully functional. The classification needs to be backed up by a test-suite
that works and passes in Travis. Not all features will necessarily be
available though, e.g. director or nspace support might be missing.

2) Sub-standard
Any language not meeting the good quality label would fall into this
bracket.

I feel that it must be clear that a sub-standard language module is not up
to the expected quality and anyone choosing to use it should not be allowed
to file bug reports unless accompanied by patches to fix. This way
expectations are set, these language modules are made available 'as is',
and anyone who wants them is encouraged to help improve them.

To this end, I propose any sub-standard module will require an additional
SWIG flag to make this clear. Something like:
-sub-standard-module-which-can-only-be-used-if-I-help-fix-problems. This
will also issue a wordy warning explaining exactly what this means, to set
expectations. The flag should also make it clear how to get involved in
order to move the module into the first-class category.

This way we don't compromise on the quality of what we currently have and
we make neglected code more easily available and hopefully encourage new
developer participation. Thoughts?

I would like to start the 3.1 branch by merging in the doxygen branch.
Vadim, any reason not to do this now? Going forwards, I'll merge master
regularly to the 3.1 branch and suggest we have Travis testing on both
master and the 3.1 branch. Unless there is a lot of developer participation
on the 3.1 branch, I expect it will take a few months to get ready in which
case we may need another one or two 3.0.x releases.
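The two-branch flow described here could look something like the following
shell sketch. This is purely illustrative: the throwaway repository, the
placeholder identity, and the commit messages are assumptions, not a
prescribed SWIG procedure.

```shell
# Throwaway demonstration of the proposed flow: day-to-day work lands
# on master, and master is merged into the long-lived 3.1 branch
# regularly so that 3.1 never drifts far behind.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"
trunk=$(git symbolic-ref --short HEAD)    # "master" in the mail's terms
git commit -q --allow-empty -m "3.0.x state"
git branch 3.1                            # start the next-release branch
git commit -q --allow-empty -m "routine fix on trunk"
git checkout -q 3.1
git merge -q "$trunk"                     # regular master -> 3.1 sync
count=$(git rev-list --count HEAD)        # 3.1 now contains both commits
echo "$count commits on 3.1"
```

Travis would then simply be pointed at both branches, as suggested above.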

If we do this, we could emphasise the change in approach by calling it
version 4.0 instead.

William
William S Fulton
2017-02-11 12:51:57 UTC
Permalink
+swig-devel list

---------- Forwarded message ----------
From: <***@comcast.net>
Date: 10 February 2017 at 20:38
Subject: [Swig-devel] Radical new approach to development and moving
towards version 3.1 or version 4.0
[...]
Post by William S Fulton
To this end, I propose any sub-standard module will require an
-sub-standard-module-which-can-only-be-used-if-I-help-fix-problems.
My suggestion would be '-experimental'. So a feature/language would
be either experimental or not experimental (standard, regular, fully
supported, etc).

[...]
Post by William S Fulton
I would like to start the 3.1 branch by merging in the doxygen
branch. Vadim, any reason not to do this now? Going forwards,
I'll merge master regularly to the 3.1 branch and suggest we
have Travis testing on both master and the 3.1 branch. Unless
there is a lot of developer participation on the 3.1 branch, I
expect it will take a few months to get ready in which case we
may need another one or two 3.0.x releases.
My two cents (and worth no more than that) is that switching to a
more branched development model would be a major improvement to swig
development. At any one point in time there would be exactly two
active branches to check in code to. They would be either the next
"major" release (3.X) or the current bug fix release (3.(X-1).y).

Only bug fixes that are very low risk would go into the 3.(X-1).y
branch. And nothing that even hints at possible backwards
incompatibility would be allowed there. The supported versions of
target languages (python, tcl, perl, etc) could be "locked" for this
branch.

Having a second 3.X branch always around would be a much needed
improvement to swig's development model (as I see it). There would
now be a place for developers to work on new features of target
languages as they evolve. This branch can break backwards
compatibility. Can add new target languages and features. Supported
versions of target languages can change.

Many other projects use such an approach. I think it might make
sense for swig to use one as well. Rather than re-inventing the
wheel, it might be possible to look at how some other projects manage
multiple active branches and adopt one of their methodologies.

Mike
William S Fulton
2017-02-13 20:01:00 UTC
Permalink
The more branched development model is something we ought to consider. It's
pretty much what I want to do over the next few months (master and next
branches). Previously I've steered away from this model as it takes extra
resources to maintain two branches which we don't have. Certainly when we
were using Subversion, but now in hindsight with the move to github and
Travis, it should be a lot easier to maintain two branches. With two
branches, it would certainly be easier to accept more risky changes and pay
proper attention to the next version. This would solve the reservations
I've had for merging risky core changes such as the doxygen improvements
into the stable releases.

William
Post by William S Fulton
+swig-devel list
---------- Forwarded message ----------
Date: 10 February 2017 at 20:38
Subject: [Swig-devel] Radical new approach to development and moving
towards version 3.1 or version 4.0
[...]
Post by William S Fulton
To this end, I propose any sub-standard module will require an
-sub-standard-module-which-can-only-be-used-if-I-help-fix-problems.
My suggestion would be '-experimental'. So a feature/language would
be either experimental or not experimental (standard, regular, fully
supported, etc).
[...]
Post by William S Fulton
I would like to start the 3.1 branch by merging in the doxygen
branch. Vadim, any reason not to do this now? Going forwards,
I'll merge master regularly to the 3.1 branch and suggest we
have Travis testing on both master and the 3.1 branch. Unless
there is a lot of developer participation on the 3.1 branch, I
expect it will take a few months to get ready in which case we
may need another one or two 3.0.x releases.
My two cents (and worth no more than that) is that switching to a
more branched development model would be a major improvement to swig
development. At any one point in time there would be exactly two
active branches to check in code to. They would be either the next
"major" release (3.X) or the current bug fix release (3.(X-1).y).
Only bug fixes that are very low risk would go into the 3.(X-1).y
branch. And nothing that even hints at possible backwards
incompatibility would be allowed there. The supported versions of
target languages (python, tcl, perl, etc) could be "locked" for this
branch.
Having a second 3.X branch always around would be a much needed
improvement to swig's development model (as I see it). There would
now be a place for developers to work on new features of target
languages as they evolve. This branch can break backwards
compatibility. Can add new target languages and features. Supported
versions of target languages can change.
Many other projects use such an approach. I think it might make
sense for swig to use one as well. Rather than re-inventing the
wheel, it might be possible to look at how some other projects manage
multiple active branches and adopt one of their methodologies.
Mike
Vadim Zeitlin
2017-02-11 16:37:11 UTC
Permalink
On Fri, 10 Feb 2017 18:49:58 +0000 William S Fulton <***@fultondesigns.co.uk> wrote:

WSF> I have also sensed some frustration that some of the half finished work
WSF> never makes it into the mainline. I've resisted pulling in some of these
WSF> branches as the quality is sub-standard and not something that we can
WSF> support given how few developers there are. The reality is some of the
WSF> target languages are completely sub-standard too, yet they already exist in
WSF> the released versions and are not removed.

Hello and thanks for starting this discussion!

Yes, at least for me this explains most of the frustration. I love SWIG,
it's a huge time-saver in practice, even if I have to admit that I like it
even more as a conceptual idea of making it possible to use C++ as a lingua
franca. But whatever its merits, it is undeniably not perfect at the
implementation level and I see it as a pragmatic tool first and foremost.
So I don't really understand when half-useful additions are not accepted
just because the other half remains to be done. Let's see the glass as
half-full rather than half-empty!

Of course, let's not get completely carried away either; there should be
some non-negotiable requirements, namely that all changes must:

1. Not break anything, no test suite regressions.
2. Be documented if they add any new features.
3. Have tests for the bugs they fix/features they introduce.
4. Generally make sense.

(with some rare and well-motivated exceptions possible for (1) in case of
intentional compatibility breakage). But beyond this, I guess I just don't
see what should prevent the changes from being accepted (besides the
obvious issues such as you finding time to review them that we can probably
do nothing about).


WSF> I wonder if we should take a new approach going forwards and propose a
WSF> single code base but classify target languages by one of two
WSF> qualities.
WSF>
WSF> 1) First-class
WSF> These would be the target languages that are recognised as good quality and
WSF> fully functional. The classification needs to be backed up by a test-suite
WSF> that works and passes in Travis. Not all features will necessarily be
WSF> available though, eg director or nspace support might be missing.
WSF>
WSF> 2) Sub-standard
WSF> Any language not meeting the good quality label would fall into this
WSF> bracket.

I think more usual terms would be "first tier", "second tier" etc, but
this is something only really useful at the overview level anyhow. I.e., as
you mention, some features such as directors might not be supported by some
languages which are perfectly fine to use otherwise, i.e. if you don't need
directors, making them "first class", but also, at the same time, worse
than "sub-standard" ("unusable"?) if you do need them. And even among
clearly first tier backends such as Java we still have some problems such
as interaction between shared pointers and directors (both of which are
supported, but not at the same time). So while it would be useful to
roughly outline the degree of support for each language in the
documentation, I don't think it's going to change much in practice.

WSF> I feel that it must be clear that a sub-standard language module is not up
WSF> to the expected quality and anyone choosing to use it should not be allowed
WSF> to file bug reports unless accompanied by patches to fix. This way
WSF> expectations are set and these language modules are made available 'as is'
WSF> and encouragement is made to help improve them if anyone wants them.

Sorry, I disagree with this as well. IMHO it can be useful to have open
bugs for known issues too; there is no obligation (it's an open source
project after all) to fix them.


WSF> To this end, I propose any sub-standard module will require an additional
WSF> SWIG flag to make this clear. Something like:
WSF> -sub-standard-module-which-can-only-be-used-if-I-help-fix-problems. This
WSF> will also issue a wordy warning explaining exactly what this means to set
WSF> expectations. The flag should also make it clear how to get involved in
WSF> order to move it into the first-class category.
WSF>
WSF> This way we don't compromise on the quality of what we currently have and
WSF> we make neglected code more easily available and hopefully encourage new
WSF> developer participation. Thoughts?

I really don't know who this would help. Not the developers and not the
maintainer (you), AFAICS: you still would want to check that any PRs even
to "N>1 tier" backends don't break compatibility, are documented etc, so
what exactly do you gain? And I don't see this helping users either, they
will have seen that the language they use is not in the first tier in the
documentation and having to provide an extra option will just be an
annoyance.

As for the wordy warning: please, just don't do this. People invariably
hate programs giving unavoidable warnings like this and I certainly
wouldn't want to see it in my own build output.


But to return to the main point of this discussion, I'd like to ask what
problem exactly are we trying to solve here? For me the problem is that
getting changes into SWIG is more difficult and takes more time than it
ideally would. The proposals above don't seem to address this (e.g. we
can't declare that Doxygen support is second tier as it is supported only
in first tier languages). So which problem do they help with?


WSF> I would like to start the 3.1 branch by merging in the doxygen branch.
WSF> Vadim, any reason not to do this now?

No, none whatsoever. I will probably do more changes to Java side of
things relatively soon, but I could just as well do them via PRs to the 3.1
branch. I didn't have time to rerun the test suite yet, but I've just
checked that the latest doxygen branch can be merged into master without
conflicts, so there is at least that. TIA!


WSF> Going forwards, I'll merge master regularly to the 3.1 branch

So all the new changes, except the incompatible ones, would still need to
be done on master and not the 3.1 branch? This is a bit unusual (I think
having a 3.0 branch and doing the development on master, which will become
3.1 or 4.0 later, is much more common), but why not.

WSF> and suggest we have Travis testing on both master and the 3.1 branch.

Yes, absolutely.

WSF> If we do this, we could emphasise the change in approach by calling it
WSF> version 4.0 instead.

Considering the (not so great) difference between SWIG 3.0 and the
previous 2.x releases, I think calling it 4.0 would be justified.

Thanks again for this discussion!
VZ
William S Fulton
2017-02-13 20:01:15 UTC
Permalink
Post by Vadim Zeitlin
On Fri, 10 Feb 2017 18:49:58 +0000 William S Fulton <
WSF> I have also sensed some frustration that some of the half finished work
WSF> never makes it into the mainline. I've resisted pulling in some of these
WSF> branches as the quality is sub-standard and not something that we can
WSF> support given how few developers there are. The reality is some of the
WSF> target languages are completely sub-standard too, yet they already exist in
WSF> the released versions and are not removed.
Hello and thanks for starting this discussion!
Yes, at least for me this explains most of the frustration. I love SWIG,
it's a huge time-saver in practice, even if I have to admit that I like it
even more as a conceptual idea of making it possible to use C++ as a lingua
franca. But whatever its merits, it is undeniably not perfect at the
implementation level and I see it as a pragmatic tool first and foremost.
So I don't really understand when half-useful additions are not accepted
just because the other half remains to be done. Let's see the glass as
half-full rather than half-empty!
Of course, let's not get completely carried away either, there should be
1. Not break anything, no test suite regressions.
2. Be documented if they add any new features.
3. Have tests for the bugs they fix/features they introduce.
4. Generally make sense.
(with some rare and well-motivated exceptions possible for (1) in case of
intentional compatibility breakage). But beyond this, I guess I just don't
see what should prevent the changes from being accepted (besides the
obvious issues such as you finding time to review them that we can probably
do nothing about).
Yes, but I think these should only apply to changes in the core and the
first-class target languages.
Post by Vadim Zeitlin
WSF> I wonder if we should take a new approach going forwards and propose a
WSF> single code base but classify target languages by one of two
WSF> qualities.
WSF>
WSF> 1) First-class
WSF> These would be the target languages that are recognised as good quality and
WSF> fully functional. The classification needs to be backed up by a test-suite
WSF> that works and passes in Travis. Not all features will necessarily be
WSF> available though, eg director or nspace support might be missing.
WSF>
WSF> 2) Sub-standard
WSF> Any language not meeting the good quality label would fall into this
WSF> bracket.
I think more usual terms would be "first tier", "second tier" etc, but
this is something only really useful at the overview level anyhow. I.e., as
you mention, some features such as directors might not be supported by some
languages which are perfectly fine to use otherwise, i.e. if you don't need
directors, making them "first class", but also, at the same time, worse
than "sub-standard" ("unusable"?) if you do need them. And even among
clearly first tier backends such as Java we still have some problems such
as interaction between shared pointers and directors (both of which are
supported, but not at the same time). So while it would be useful to
roughly outline the degree of support for each language in the
documentation, I don't think it's going to change much in practice.
I'm open to naming them whatever is appropriate. However, to me it is quite
clear that there should be two categories quite simply based around whether
or not the test-suite passes. The reason is firstly that the test-suite
gives a fairly good simple indication of good overall support as it has
comprehensive coverage of features but is also lax enough to not insist on
all features being covered. Secondly, from a practical point of view a
working test-suite means we can be fairly confident about maintaining
quality when patches are supplied (no breakages and asking for a working
test case). Yes, the documentation should clarify the support in detail.
Post by Vadim Zeitlin
WSF> I feel that it must be clear that a sub-standard language module is not up
WSF> to the expected quality and anyone choosing to use it should not be allowed
WSF> to file bug reports unless accompanied by patches to fix. This way
WSF> expectations are set and these language modules are made available 'as is'
WSF> and encouragement is made to help improve them if anyone wants them.
Sorry, I disagree with this as well. IMHO it can be useful to have open
bugs for known issues too, there is no obligation (it's an open source
project after all) to fix them.
Without a maintainer for sub-standard languages and the clear advertising
that the target language backend is wholly incomplete, there is little
point in raising a bug for something that is in reality going to be
ignored. There'd easily be hundreds of bugs when they ought to be filed
under one bug, 'language x is wholly inadequate'. We don't have enough
resources to handle bugs that won't be addressed, and my suggested approach
is to make the point that nothing is going to happen unless a user
contributes. The entry level for users to contribute would be very low too
as patches won't need to meet the usual quality standards of having a test
and demonstrating no regressions.
Post by Vadim Zeitlin
WSF> To this end, I propose any sub-standard module will require an additional
WSF> -sub-standard-module-which-can-only-be-used-if-I-help-fix-problems. This
WSF> will also issue a wordy warning explaining exactly what this means to set
WSF> expectations. The flag should also how make it clear how to get involved in
WSF> order to move it into the first-class category.
WSF>
WSF> This way we don't compromise on the quality of what we currently have and
WSF> we make neglected code more easily available and hopefully encourage new
WSF> developer participation. Thoughts?
I really don't know who this would help. Not the developers and not the
maintainer (you), AFAICS: you still would want to check that any PRs even
to "N>1 tier" backends don't break compatibility, are documented etc, so
what exactly do you gain? And I don't see this helping users either, they
will have seen that the language they use is not in the first tier in the
documentation and having to provide an extra option will just be an
annoyance.
As for the wordy warning: please, just don't do this. People invariably
hate programs giving unavoidable warnings like this and I certainly
wouldn't want to see it in my own build output.
The usual warning suppressions would work. I'm keen to make the lack of
quality very, very clear.
Post by Vadim Zeitlin
But to return to the main point of this discussion, I'd like to ask what
problem exactly are we trying to solve here? For me the problem is that
getting changes into SWIG is more difficult and takes more time than it
ideally would. The proposals above don't seem to address this (e.g. we
can't declare that Doxygen support is second tier as it is supported only
in first tier languages). So which problem do they help with?
The drivers for this suggestion are to scoop up the half developed target
languages such as the C, Objective-C, hhvm work. It is NOT for dealing with
large changes to the core, such as doxygen. I'd like a clear distinction
between the sub-standard and first-class target language backends. One
reason is I don't know how to deal with all the patches for the languages
that don't meet your requirement 1. above because they simply do not have a
working test-suite. The idea is to have the sub-standard target languages
where we drop our high quality standards for accepting patches because we
do not guarantee any kind of backwards compatibility. There would also be no
guarantee that they will work at all because 1) we know they are barely
functional and 2) we cannot test them. I suggest we apply this flag to any new
languages submitted that do not fully pass the test-suite and to the
current list of languages where the test-suite does not work, that is:

Allegrocl, chicken, clisp, cffi, modula3, mzscheme, ocaml, pike, exp, uffi.

The following branches may be in a better state than the above target
languages and are candidates for inclusion as sub-standard:

- all426-fortran
- gsoc2008-jezabek (COM)
- gsoc2012-c
- gsoc2012-objc
- gsoc2016-hhvm
- https://github.com/glycerine/swig-cpp-to-cffi-for-common-lisp-enhancements

The fact that some of these branches are in a better state than the list of
sub-standard languages already in SWIG irks me. I have wondered if we
should drop sub-standard languages instead or make them a configure-time
inclusion. Anyway, I'm proposing to treat half-finished work more
consistently going forwards. The reason for including them with an annoying
flag and warning is to clearly delineate good quality from bad quality but
at the same time their inclusion will make them more accessible with the
hope that they will get more attention for better development.

I've also been looking at the matlab work at
https://github.com/jaeandersson/swig and it looks okay as a first-class
language.

The main reason for having the annoying flag and warning is to make it
blatantly clear that the target language offers very little and as
development has (most likely) been abandoned, you should only use it if you
are prepared to pick up its development. By including it in the main
code base and relaxing the standards for accepting patches, we ought to
remove any obstacles to development. If someone starts to pick up development,
we'd encourage them to get the test-suite working and work towards a
first-class language.

I also do not want to be associated with a project with low standards and
so the current standards for accepting changes for the first-class
languages will continue as is.
Post by Vadim Zeitlin
WSF> If we do this, we could emphasise the change in approach by calling it
WSF> version 4.0 instead.
Considering the (not so great) difference between SWIG 3.0 and the
previous 2.x releases, I think calling it 4.0 would be justified.
The main reason we issued a major release for 2.0 was the license
change, and for 3.0 it was the additional C++ features: C++11 and nested
classes. Doxygen documentation support is a fairly big feature, so could
be 3.1 or 4.0. If we go for the dual classification of the target
languages, it feels like more of a major 4.0 release.

Post by Vadim Zeitlin
Thanks again for this discussion!
SWIG has many contributors providing small incremental improvements and
this is great. However, SWIG is one of the really big open source projects
in the world but has too few committed developers. The discussion really is
about how best those committed developers can spend their precious time and
also encourage new developers/make it easy to contribute. I see these as
the main problems with SWIG at the moment, and this discussion is really
for ideas to overcome them. And finally, most importantly, the users expect
high standards and backwards compatibility in the main (first-class)
languages, and the quality of these should not be compromised.

William
Vadim Zeitlin
2017-02-13 22:23:51 UTC
Permalink
Hello again,

On Mon, 13 Feb 2017 20:01:15 +0000 William S Fulton <***@fultondesigns.co.uk> wrote:

WSF> On 11 February 2017 at 16:37, Vadim Zeitlin <vz-***@zeitlins.org> wrote:
...
WSF> > Of course, let's not get completely carried away neither, there should be
WSF> > some non-negotiable requirements such as that all changes must:
WSF> >
WSF> > 1. Not break anything, no test suite regressions.
WSF> > 2. Be documented if they add any new features.
WSF> > 3. Have tests for the bugs they fix/features they introduce.
WSF> > 4. Generally make sense.
...
WSF> Yes, but I think these should only apply to changes in the core and the
WSF> first-class target languages.

Are you really sure you want to relax these requirements for the second
tier languages? I really don't think it's a good idea. As an example,
suppose I manage to advance my work on the "C" branch far enough that it
becomes suitable for my own project (in reality I haven't had any time to
even touch it for six months now, but let's dream for a moment). It still
won't be ready to become first-class target language as a lot of things
will be missing and half of the test suite will still remain broken.
However I would very much appreciate that the half which does pass
continues to pass and that people don't add random hacks without any
documentation as it has happened in the past with the Python module, for
example, resulting in the mess of all the different options that we have
now.

Having different rules for different parts of the code base is also going
to be confusing for the contributors, whereas the simple rules above are
quite clear and widely used, so I think they would be much simpler to
explain.

To reiterate, I strongly believe that the rules should be the same for
everything, regardless of the class/tier/whatever.


WSF> I'm open to naming them whatever is appropriate. However, to me it is quite
WSF> clear that there should be two categories quite simply based around whether
WSF> or not the test-suite passes.

I think it would be more fruitful to make the test suite pass by disabling
the currently broken tests, as I did in the "C" branch. Ideal would be to
mark the failing tests as "xfail", i.e. expected-to-fail, but this would
require some changes to the makefiles.

WSF> The reason is firstly that the test-suite gives a fairly good simple
WSF> indication of good overall support as it has comprehensive coverage of
WSF> features but is also lax enough to not insist on all features being
WSF> covered. Secondly, from a practical point of view a working test-suite
WSF> means we can be fairly confident about maintaining quality when
WSF> patches are supplied (no breakages and asking for a working test
WSF> case).

Finally I've found something that I can agree with wholeheartedly :-) Yes,
I totally share your thoughts about this and this is exactly why I believe
all changes must preserve the currently passing tests, whatever the tier of
the language.

WSF> Without a maintainer for sub-standard languages and the clear advertising
WSF> that the target language backend is wholly incomplete, there is little
WSF> point in raising a bug for something that is in reality going to be
WSF> ignored. There'd easily be hundreds of bugs when they ought to be filed
WSF> under one bug, 'language x is wholly inadequate'. We don't have enough
WSF> resources to handle bugs that won't be addressed,

I agree with you if we assume that we have the goal of clearing the
backlog of bugs or even just keeping its size under control. However I
don't know if it's a realistic goal even if we limit it to first-tier
languages. And if not, why should it matter if we have extra bugs there?
They could be useful to someone thinking about maintaining some module and
I just don't see any harm in having them.

WSF> and my suggested approach is to make the point that nothing is going
WSF> to happen unless a user contributes.

Yes, sure.

WSF> The entry level for users to contribute would be very low too as
WSF> patches won't need to meet the usual quality standards of having a
WSF> test and demonstrating no regressions.

But I still disagree very much with this.


WSF> The drivers for this suggestion are to scoop up the half developed target
WSF> languages such as the C, Objective-C, hhvm work. It is NOT for dealing with
WSF> large changes to the core, such as doxygen. I'd like a clear distinction
WSF> between the sub-standard and first-class target languages backends. One
WSF> reason is I don't know how to deal with all the patches for the languages
WSF> that don't meet your requirement 1. above because they simply do not have a
WSF> working test-suite.

I hope my suggestion above answers this. True, we still need somebody to
remove all the failing tests, i.e. create the lists of FAILING_{C,CPP}_TESTS
as in https://github.com/vadz/swig/blob/C/Examples/test-suite/c/Makefile.in#L30,
and I don't really volunteer for this, but it shouldn't be that difficult.
In the absolute worst case we could just copy {C,CPP}_TEST_CASES to these
variables although if we can't find any passing tests for some language
this does raise the question of whether it's useful to keep it at all.

We could even mostly automate things by introducing xfail test machinery,
which would be useful anyhow (and I might volunteer for this...) and
starting with FAILING_X_TESTS==X_TEST_CASES and then removing the tests
that unexpectedly passed from it.
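The starting point sketched here (initialise the failing list to the full
test list, then prune the tests that unexpectedly pass) boils down to set
subtraction. A minimal shell sketch follows; the test names are made-up
placeholders, not the real test-suite entries or Makefile contents:

```shell
# Sketch of the proposed mechanism: derive the set of tests to run by
# subtracting a maintained known-failing list from the full list.
# The test names below are hypothetical placeholders.
ALL_TESTS="li_std_string director_basic template_ns overload_simple"
FAILING_TESTS="director_basic template_ns"   # would live in the Makefile

PASSING_TESTS=""
for t in $ALL_TESTS; do
  case " $FAILING_TESTS " in
    *" $t "*) ;;                             # expected to fail: skip
    *) PASSING_TESTS="$PASSING_TESTS $t" ;;  # keep: must stay green
  esac
done
echo "would run:$PASSING_TESTS"
```

An xfail variant would additionally run the failing list and flag any test
that changes state, which is exactly what starting from
FAILING_X_TESTS==X_TEST_CASES enables.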

WSF> The idea is to have the sub-standard target languages where we drop
WSF> our high quality standards for accepting patches because we do not
WSF> guarantee any kind of backwards compatibility.

BTW, I agree that backwards compatibility requirements could be relaxed
for the second-tier languages. But I'd still really, really like to keep
the absence of regressions and the documentation requirements.

WSF> 2) We cannot test them.

If we can't test them at all, chances are they don't work at all either.
In this case it might be better to just drop support for them.

WSF> I suggest we apply this flag to any new languages submitted that do
WSF> not fully pass the test-suite and to the current list of languages
WSF> where the test-suite does not work, that is:
WSF>
WSF> Allegrocl, chicken, clisp, cffi, modula3, mzscheme, ocaml, pike, exp, uffi.

I have no experience with any of these languages (and in fact don't even
know about half of them), so I can't really comment on any of them...

WSF> The following branches may be in a better state than the above target
WSF> languages and are candidates for inclusion as sub-standard:
WSF>
WSF> - all426-fortran
WSF> - gsoc2008-jezabek (COM)
WSF> - gsoc2012-c

Note that the "C" branch in my fork is significantly different from this
branch, and I also have some changes on the "COM" branch, although I
abandoned working on it once we decided not to use COM directly any longer.

WSF> The main reason for having the annoying flag and warning is to make it
WSF> blatantly clear that the target language offers very little and as
WSF> development has (most likely) been abandoned, you should only use it if you
WSF> are prepared to pick up the development of it. By including it in the main
WSF> code base and relaxed standards for accepting patches, it ought to remove
WSF> any obstacles for development. If someone starts to pick up development,
WSF> we'd encourage them to get the test-suite working and work towards a
WSF> first-class language.

Sorry, I'm not convinced. If/when someone starts working on some module,
the very first thing to do would be to ensure that the tests can be run for
it and then we shouldn't accept any regressions. Otherwise we simply won't
ever be sure whether we're making any progress at all, especially for the
languages that most of SWIG contributors don't even know.

WSF> The discussion really is about how best those committed developers can
WSF> spend their precious time and also encourage new developers/make it
WSF> easy to contribute. I see these as the main problems with SWIG at the
WSF> moment and this discussion is really for ideas to overcome this
WSF> problem.

I honestly think that using the same requirements for all contributions to
the project would serve this goal better than having different standards. I
would drop all the modules that don't have any functional test suite at
all, make the test suite pass for all the other ones by excluding all
currently failing tests from it, and keep requiring that it passes.

Regards,
VZ
William S Fulton
2017-02-14 21:06:03 UTC
Permalink
Post by Vadim Zeitlin
Hello again,
On Mon, 13 Feb 2017 20:01:15 +0000 William S Fulton <
...
WSF> > Of course, let's not get completely carried away neither, there should be
WSF> >
WSF> > 1. Not break anything, no test suite regressions.
WSF> > 2. Be documented if they add any new features.
WSF> > 3. Have tests for the bugs they fix/features they introduce.
WSF> > 4. Generally make sense.
...
WSF> Yes, but I think these should only apply to changes in the core and the
WSF> first-class target languages.
Are you really sure you want to relax these requirements for the second
tier languages? I really don't think it's a good idea. As an example,
suppose I manage to advance my work on the "C" branch far enough that it
becomes suitable for my own project (in reality I haven't had any time to
even touch it for the last 6 months, but let's dream for a moment). It still
won't be ready to become first-class target language as a lot of things
will be missing and half of the test suite will still remain broken.
However I would very much appreciate that the half which does pass
continues to pass and that people don't add random hacks without any
documentation as it has happened in the past with the Python module, for
example, resulting in the mess of all the different options that we have
now.
Having different rules for different parts of the code base is also going
to be confusing for the contributors, whereas the simple rules above are
quite clear and widely used, so I think they would be much simpler to
explain.
To reiterate, I strongly believe that the rules should be the same for
everything, regardless of the class/tier/whatever.
WSF> I'm open to naming them whatever is appropriate. However, to me it is quite
WSF> clear that there should be two categories quite simply based around whether
WSF> or not the test-suite passes.
I think it would be more fruitful to make the test suite pass by disabling
the currently broken tests, as I did in the "C" branch. Ideally we would
mark the failing tests as "xfail", i.e. expected-to-fail, but this would
require some changes to the makefiles.
WSF> The reason is firstly that the test-suite gives a fairly good simple
WSF> indication of good overall support as it has comprehensive coverage of
WSF> features but is also lax enough to not insist on all features being
WSF> covered. Secondly, from a practical point of view a working test-suite
WSF> means we can be fairly confident about maintaining quality when
WSF> patches are supplied (no breakages and asking for a working test
WSF> case).
Finally I've found something that I can agree with wholeheartedly :-) Yes,
I totally share your thoughts about this and this is exactly why I believe
all changes must preserve the currently passing tests, whatever the tier of
the language.
WSF> Without a maintainer for sub-standard languages and the clear advertising
WSF> that the target language backend is wholly incomplete, there is little
WSF> point in raising a bug for something that is in reality going to be
WSF> ignored. There'd easily be hundreds of bugs when they ought to be filed
WSF> under one bug, 'language x is wholly inadequate'. We don't have enough
WSF> resources to handle bugs that won't be addressed,
I agree with you if we assume that we have the goal of clearing the
backlog of bugs or even just keeping its size under control. However I
don't know if it's a realistic goal even if we limit it to first-tier
languages. And if not, why should it matter if we have extra bugs there?
They could be useful to someone thinking about maintaining some module and
I just don't see any harm in having them.
WSF> and my suggested approach is to make the point that nothing is going
WSF> to happen unless a user contributes.
Yes, sure.
WSF> The entry level for users to contribute would be very low too as
WSF> patches won't need to meet the usual quality standards of having a
WSF> test and demonstrating no regressions.
But I still disagree very much with this.
WSF> The drivers for this suggestion are to scoop up the half developed target
WSF> languages such as the C, Objective-C, hhvm work. It is NOT for dealing with
WSF> large changes to the core, such as doxygen. I'd like a clear distinction
WSF> between the sub-standard and first-class target languages backends. One
WSF> reason is I don't know how to deal with all the patches for the languages
WSF> that don't meet your requirement 1. above because they simply do not have a
WSF> working test-suite.
I hope my suggestion above answers this. True, we still need somebody to
remove all the failing tests, i.e. create the lists of FAILING_{C,CPP}_TESTS
as in https://github.com/vadz/swig/blob/C/Examples/test-suite/c/Makefile.in#L30,
and I don't really volunteer for this, but it shouldn't be that difficult.
In the absolutely worst case we could just copy {C,CPP}_TEST_CASES to these
variables although if we can't find any passing tests for some language
this does raise the question of whether it's useful to keep it at all.
We could even mostly automate things by introducing xfail tests machinery,
which would be useful anyhow (and I might volunteer for this...) and
starting with FAILING_X_TESTS==X_TEST_CASES and then removing the tests
that unexpectedly passed from it.
WSF> The idea is to have the sub-standard target languages where we drop
WSF> our high quality standards for accepting patches because we do not
WSF> guarantee any kind of backwards compatibility.
BTW, I agree that backwards compatibility requirements could be relaxed
for the second-tier languages. But I'd still really, really like to keep
the absence of regressions and the documentation requirements.
WSF> 2) We cannot test them.
If we can't test them at all, chances are they don't work at all either.
In that case it might be better to just drop support for them.
WSF> I suggest we apply this flag to any new languages submitted that do
WSF> not fully pass the test-suite and to the current list of languages
WSF>
WSF> Allegrocl, chicken, clisp, cffi, modula3, mzscheme, ocaml, pike, exp, uffi.
I have no experience with any of these languages (and in fact don't even
know about half of them), so I can't really comment on any of them...
WSF> The following branches may be in a better state than the above target
WSF>
WSF> - all426-fortran
WSF> - gsoc2008-jezabek (COM)
WSF> - gsoc2012-c
Notice that "C" branch in my fork is significantly different from this
branch and I also have some changes to the "COM" branch, although I
abandoned working on it as we decided to not use COM directly any longer.
WSF> The main reason for having the annoying flag and warning is to make it
WSF> blatantly clear that the target language offers very little and as
WSF> development has (most likely) been abandoned, you should only use it if you
WSF> are prepared to pick up the development of it. By including it in the main
WSF> code base and relaxed standards for accepting patches, it ought to remove
WSF> any obstacles for development. If someone starts to pick up development,
WSF> we'd encourage them to get the test-suite working and work towards a
WSF> first-class language.
Sorry, I'm not convinced. If/when someone starts working on some module,
the very first thing to do would be to ensure that the tests can be run for
it and then we shouldn't accept any regressions. Otherwise we simply won't
ever be sure whether we're making any progress at all, especially for the
languages that most of SWIG contributors don't even know.
WSF> The discussion really is about how best those committed developers can
WSF> spend their precious time and also encourage new developers/make it
WSF> easy to contribute. I see these as the main problems with SWIG at the
WSF> moment and this discussion is really for ideas to overcome this
WSF> problem.
I honestly think that using the same requirements for all contributions to
the project would serve this goal better than having different standards. I
would drop all the modules that don't have any functional test suite at
all, make test suite pass for all the other ones by excluding all currently
failing tests from it, and keep requiring that it passes.
These are all fair points if we have testable code. I think you've
misunderstood the problem I'm suggesting we solve though. A lot of the
sub-standard languages don't have any test-suite whatsoever. I'm trying to
find a solution for these really rather shocking modules. Take the Fortran
module: it has hardly anything in it. Should we pretend it never happened
and get rid of it? If so, why should we keep the Modula3 module, which is
already included but is in a similarly under-developed state? And then we
have a number of others, such as Pike, which has a test-suite but I've no
idea how to make it work. Then we have the cffi module with quite a few
patches, but again, no idea how to make the test-suite work. What's your
solution for dealing with all these different modules?

Modifying the test-suite to be more accommodating to nearly working modules
would be nice. However, my gut feeling is that once a test goes into the
not-working category, it will never make its way out, and hence for the last
few years I've been fairly strict about only accepting new modules that have
all the tests passing.

William
Vadim Zeitlin
2017-02-15 15:59:49 UTC
Permalink
On Tue, 14 Feb 2017 21:06:03 +0000 William S Fulton <***@fultondesigns.co.uk> wrote:

WSF> These are all fair points if we have testable code. I think you've
WSF> misunderstood the problem I'm suggesting we solve though. A lot of the
WSF> sub-standard languages don't have any test-suite whatsoever. I'm trying to
WSF> find a solution for these really rather shocking modules. Take the Fortran
WSF> module, it has hardly anything in it. Should we pretend it never happened
WSF> and get rid of it? If so, why should we keep the Modula3 module, which is
WSF> already included but is in a similar hardly developed state. And then we
WSF> have a number of other such as Pike which has a test-suite but I've no idea
WSF> how to make it work. Then we have the cffi module with quite a few patches,
WSF> but again, no idea how to make the test-suite work. What's your solution to
WSF> dealing with all these different modules?

Sorry, just a quick reply to say that I understand your point but feel
like I have to spend some time on actually examining the current state of
the test suite in these backends before answering -- and I just don't have
this time right now. Hopefully I'll find some towards the end of the week.

My overall approach would be to apply reasonable effort to salvage
what can be salvaged and drop all the rest. But perhaps this is too
extreme.

WSF> Modifying the test-suite to be more accommodating to nearly working modules
WSF> would be nice. However, my gut feeling is that once a test goes into the
WSF> not working category, it will never make its way out and hence for the last
WSF> few years, I've been fairly strict on accepting new modules that have all
WSF> the tests passing.

Making all the tests pass is a very high bar. Again, a module can be
useful even if it doesn't support directors, or even if it doesn't support
std::shared_ptr<> (which is crucial for me personally, so I give it as an
example just to avoid the appearance of bias). IMO, as soon as we have a
reasonable number of passing tests and a minimal amount of documentation,
there is no real reason not to add the new module to SWIG and continue
improving it later.

Regards,
VZ
Vadim Zeitlin
2017-02-20 01:03:28 UTC
Permalink
On Tue, 14 Feb 2017 21:06:03 +0000 William S Fulton <***@fultondesigns.co.uk> wrote:

WSF> I think you've misunderstood the problem I'm suggesting we solve
WSF> though. A lot of the sub-standard languages don't have any test-suite
WSF> whatsoever.

Let's look at each language in more details:

Language(s)         Last Maintainer   Test Suite[2]   Documentation
                    Change[1]
----------------------------------------------------------------------------
C#                  2017-01-27        CI              Yes
D                   2014-10-30        CI              Yes
Go                  2016-10-11        CI              Yes
Guile               2013-04-28        CI              Yes
Java                2017-01-22        CI              Yes
JavaScript          2014-05-19        CI              Yes
Lisp (GNU)          2005-12-27        Bad             Minimal
Lisp (Allegro)      2011-06-21        Bad             Yes
Lisp (S-Exp)        Antiquity         No              No
Lua                 2014-04-23        CI              Yes
Modula 3            2005-09-29        No              Yes
OCaml               2006-11-03        Fail            Yes
Octave              2014-10-05        Fail            Yes
PHP                 2016-12-30        CI              Yes
Perl                2013-11-14        CI              Yes
Pike                2004-10-16        Bad             Minimal
Python              2016-12-03        CI              Yes
R                   2016-11-12        CI              Minimal
Racket (MzScheme)   2006-07-09        Fail            Minimal
Ruby                2016-05-27        CI              Yes
Scheme (CHICKEN)    2005-02-01        Fail            Yes
Scilab              2016-07-29        CI              Yes
TCL                 2006-02-09        CI              Yes

[1] "Last Maintainer Change" column contains the date of the last
non-trivial change done by the language maintainer (this is mostly to
not count pure maintenance changes by William and Olly, although I do
count William's changes for C#, Python and Ruby, which he seems to be
maintaining as well, and Olly's changes for PHP). "Antiquity" refers to
the period before the global merges/moves (not tracked by cvs in use
back then) done at the end of 2002. I didn't bother manually tracking
further back in time; no changes in 15 years is good enough proof
of being dead to me.

[2] "Test Suite" column contains "CI" if the language is already tested by
Travis-CI. "Fail" means that either it is already being tested by
Travis-CI but marked as "allow failures", or that I tested it myself
(mzscheme) and at least some tests pass -- hopefully this means that it
could be made to pass by marking some tests as failing and so be
checked by CI. "Bad" means that the test suite doesn't really exist,
i.e. there are no language-specific tests and the generic tests don't
pass anyhow. "No" means that no tests can be run for this language at
all, i.e. there is not even any makefile support.


I think that looking at the above it's pretty obvious that the GNU Common
Lisp, S-Expr, Modula 3 and Pike modules should just be removed; there is
really not much of practical use there and nobody has worked on them for a
long time. Allegro CL might be salvageable, but I'd remove it as well.

Scheme languages look better superficially, but CHICKEN plans to drop
support for SWIG in the next version, see
https://lists.gnu.org/archive/html/chicken-users/2016-10/msg00018.html
and Racket is now a different language from "plain" Scheme, so I don't
think it's going to be simple to update 10+ year old code to work with it.
IOW, I would drop those as well, even though they do have some test suite.

So, all things considered, I agree with you: it's not worth making the test
suite pass for any of the languages not currently being tested. But taking
into account the table and the remarks above, I think it makes much more
sense to drop them entirely in the 4.0 release rather than keep them as
zombies in the tree.

The bad news is that I suggest dropping 30% of the languages supported by
SWIG. The good news is that the rest of them seem to be in decent shape,
and we should be able to move forward with less friction if we do this.

What do you think?
VZ
William S Fulton
2017-02-21 08:20:30 UTC
Permalink
Post by Vadim Zeitlin
On Tue, 14 Feb 2017 21:06:03 +0000 William S Fulton <
WSF> I think you've misunderstood the problem I'm suggesting we solve
WSF> though. A lot of the sub-standard languages don't have any test-suite
WSF> whatsoever.
Language(s)         Last Maintainer   Test Suite[2]   Documentation
                    Change[1]
----------------------------------------------------------------------------
C#                  2017-01-27        CI              Yes
D                   2014-10-30        CI              Yes
Go                  2016-10-11        CI              Yes
Guile               2013-04-28        CI              Yes
Java                2017-01-22        CI              Yes
JavaScript          2014-05-19        CI              Yes
Lisp (GNU)          2005-12-27        Bad             Minimal
Lisp (Allegro)      2011-06-21        Bad             Yes
Lisp (S-Exp)        Antiquity        No              No
Lua                 2014-04-23        CI              Yes
Modula 3            2005-09-29        No              Yes
OCaml               2006-11-03        Fail            Yes
Octave              2014-10-05        Fail            Yes
PHP                 2016-12-30        CI              Yes
Perl                2013-11-14        CI              Yes
Pike                2004-10-16        Bad             Minimal
Python              2016-12-03        CI              Yes
R                   2016-11-12        CI              Minimal
Racket (MzScheme)   2006-07-09        Fail            Minimal
Ruby                2016-05-27        CI              Yes
Scheme (CHICKEN)    2005-02-01        Fail            Yes
Scilab              2016-07-29        CI              Yes
TCL                 2006-02-09        CI              Yes
[1] "Last Maintainer Change" column contains the date of the last
non-trivial change done by the language maintainer (this is mostly to
not count pure maintenance changes by William and Olly, although I do
count William's changes for C#, Python and Ruby, which he seems to be
maintaining as well, and Olly's changes for PHP). "Antiquity" refers to
the period before the global merges/moves (not tracked by cvs in use
back then) done at the end of 2002. I didn't bother manually tracking
further back in time; no changes in 15 years is good enough proof
of being dead to me.
[2] "Test Suite" column contains "CI" if the language is already tested by
Travis-CI. "Fail" means that either it is already being tested by
Travis-CI but marked as "allow failures", or that I tested it myself
(mzscheme) and at least some tests pass -- hopefully this means that it
could be made to pass by marking some tests as failing and so be
checked by CI. "Bad" means that the test suite doesn't really exist,
i.e. there are no language-specific tests and the generic tests don't
pass anyhow. "No" means that no tests can be run for this language at
all, i.e. there is not even any makefile support.
I think that looking at the above it's pretty obvious that GNU Common
Lisp, S-Expr, Modula 3 and Pike modules should be just removed, there is
really not much useful in practice there and nobody works on them since a
long time. Allegro CL might be salvageable but I'd remove it as well.
Scheme languages look better superficially, but CHICKEN plans dropping
support for SWIG in the next version, see
https://lists.gnu.org/archive/html/chicken-users/2016-10/msg00018.html
and Racket is now a different language than "plain" Scheme, so I don't
think it's going to be simple to update 10+ year old code to work with it.
IOW, I would drop those as well, even though they do have some test suite.
So, all being said, I agree with you: it's not worth making the test suite
pass for any of the languages not being currently tested. But taking into
account the table and the remarks above, I think it makes much more sense
to drop them entirely in 4.0 release rather than to keep them as zombies in
the tree.
The bad news is that I suggest dropping 30% of the languages supported by
SWIG. The good news is that the rest of them seems to be in a decent shape
and we should be able to move forward with less friction if we do this.
What do you think?
It isn't that black and white, and the modules that aren't in master have
been overlooked. The primary purpose of classifying modules as
standard/experimental etc. is to encourage availability and further
development of them. I don't have a problem dropping the useless/clearly
abandoned modules you've helped identify, but I don't feel you've addressed
the main purpose of my original email, which was to fundamentally address
the modules that are not in quite a good enough state but are potentially
useful.

In more detail...

The table above is missing CFFI and UFFI; they have test-suites, but these
would probably be classified as 'Bad'. There are users using CFFI and
interest in updating it from the glycerine fork, see
https://github.com/swig/swig/issues/877. CFFI will probably become 'Fail'
with this work.

The Octave test-suite has been working for a few years. A few days ago a
bug/glitch in Octave 4.2 testing appeared, but as far as I'm concerned the
test-suite is in good shape for Octave.

Then we have the other languages I think are sub-standard and potential
candidates to include:

- all426-fortran
- gsoc2008-jezabek (COM)
- gsoc2012-c (or VZ fork)
- gsoc2012-objc
- gsoc2016-hhvm
- matlab

Apart from matlab, these are probably 'Fail'. So an updated table, including
the branch names for updated/new modules and a status column, is:

Language(s)         Branch[3]          Last Maintainer   Test Suite[2]   Documentation   SWIG4
                                       Change[1]                                         Status[4]
------------------------------------------------------------------------------------------
C                   vadz/C             2016-09-15        Fail            Yes             Experimental
C#                                     2017-01-27        CI              Yes
COM                 gsoc2008-jezabek   2009-09-04        Fail            Yes             Experimental
D                                      2014-10-30        CI              Yes
Fortran             all426-fortran     2010-07-21        No              No              Delete
Go                                     2016-10-11        CI              Yes
Guile                                  2013-04-28        CI              Yes
HHVM                gsoc2016-hhvm      2016-06-17        No              No              Delete
Java                                   2017-01-22        CI              Yes
JavaScript                             2014-05-19        CI              Yes
Lisp (Allegro)                         2011-06-21        Bad             Yes             Experimental
Lisp (GNU)                             2005-12-27        Bad             Minimal         Delete
Lisp (CFFI)         glycerine/swi...   2011-04-15        Bad/Fail        Minimal         Experimental
Lisp (UFFI)                            2005-08-09        Bad             No              Delete
Lisp (S-Exp)                           Antiquity         No              No              Delete
Lua                                    2014-04-23        CI              Yes
Matlab              matlab             2017-02-17        CI              No              Experimental
Modula 3                               2005-09-29        No              Yes
Objective-C         gsoc2012-objc      2012-08-20        No              No              Experimental
OCaml                                  2006-11-03        Fail            Yes             Experimental
Octave                                 2014-10-05        CI              Yes
PHP                                    2016-12-30        CI              Yes
Perl                                   2013-11-14        CI              Yes
Pike                                   2004-10-16        Bad             Minimal         Delete
Python                                 2016-12-03        CI              Yes
R                                      2016-11-12        CI              Minimal
Racket (MzScheme)                      2006-07-09        Fail            Minimal         Experimental
Ruby                                   2016-05-27        CI              Yes
Scheme (CHICKEN)                       2005-02-01        Fail            Yes             Experimental
Scilab                                 2016-07-29        CI              Yes
TCL                                    2006-02-09        CI              Yes

[3] master if not specified
[4] 1st class/standard module if not specified

I've added 'experimental' for the sub-standard languages and proposed
deleting some of them. Deleted modules are mostly those where the
test-suite is non-existent or 'Bad'.

I've labelled Matlab as experimental until documentation is available.
Unless anyone thinks otherwise, HHVM development seemed promising but
appears to have stalled at an early stage, so I propose removing the
branch. Although Objective-C has no test-suite, I think I can add one to
bring it up to experimental status... it is quite an important language,
so I would like to help its chance of future success.

I think we can debate which languages are experimental or to be deleted
for SWIG 4, but I am keen to establish the concept of distinguishing
experimental/sub-standard modules from those that fully pass the
test-suite. The important question is: what reasons, if any, are there for
not having experimental languages?

William
Olly Betts
2017-04-12 03:41:30 UTC
Permalink
Post by William S Fulton
Unless anyone thinks otherwise, HHVM development seemed promising, but
seems to have stalled at an early stage so propose removing the branch.
The HHVM branch actually satisfied all the merge criteria except for
documentation. Nishant said he'd try to sort that out, but he's now
working for facebook and seems rather lacking in free time lately.

I think it would actually be a prime candidate for getting onto master
in any sane new world order.

Cheers,
Olly
William Fulton
2017-02-21 09:14:49 UTC
Permalink
On 13 Feb 2017 10:24 p.m., "Vadim Zeitlin" <vz-***@zeitlins.org> wrote:

Hello again,

On Mon, 13 Feb 2017 20:01:15 +0000 William S Fulton <***@fultondesigns.co.uk>
wrote:

WSF> On 11 February 2017 at 16:37, Vadim Zeitlin <vz-***@zeitlins.org>
wrote:
...
WSF> > Of course, let's not get completely carried away neither, there
should be
WSF> > some non-negotiable requirements such as that all changes must:
WSF> >
WSF> > 1. Not break anything, no test suite regressions.
WSF> > 2. Be documented if they add any new features.
WSF> > 3. Have tests for the bugs they fix/features they introduce.
WSF> > 4. Generally make sense.
...
WSF> Yes, but I think these should only apply to changes in the core and the
WSF> first-class target languages.

Are you really sure you want to relax these requirements for the second
tier languages? I really don't think it's a good idea. As an example,
suppose I manage to advance my work on the "C" branch far enough that it
becomes suitable for my own project (in reality I haven't had any time to
even touch it for the last 6 months, but let's dream for a moment). It still
won't be ready to become first-class target language as a lot of things
will be missing and half of the test suite will still remain broken.
However I would very much appreciate that the half which does pass
continues to pass and that people don't add random hacks without any
documentation as it has happened in the past with the Python module, for
example, resulting in the mess of all the different options that we have
now.

Having different rules for different parts of the code base is also going
to be confusing for the contributors, whereas the simple rules above are
quite clear and widely used, so I think they would be much simpler to
explain.

To reiterate, I strongly believe that the rules should be the same for
everything, regardless of the class/tier/whatever.


WSF> I'm open to naming them whatever is appropriate. However, to me it is
quite
WSF> clear that there should be two categories quite simply based around
whether
WSF> or not the test-suite passes.

I think it would be more fruitful to make the test suite pass by disabling
the currently broken tests, as I did in the "C" branch. Ideal would be to
mark the failing tests as "xfail", i.e. expected-to-fail, but this would
require some changes to the makefiles.



WSF> The entry level for users to contribute would be very low too as
WSF> patches won't need to meet the usual quality standards of having a
WSF> test and demonstrating no regressions.

But I still disagree very much with this.


WSF> The drivers for this suggestion are to scoop up the half developed
target
WSF> languages such as the C, Objective-C, hhvm work. It is NOT for dealing
with
WSF> large changes to the core, such as doxygen. I'd like a clear
distinction
WSF> between the sub-standard and first-class target languages backends. One
WSF> reason is I don't know how to deal with all the patches for the
languages
WSF> that don't meet your requirement 1. above because they simply do not
have a
WSF> working test-suite.

I hope my suggestion above answers this. True, we still need somebody to
remove all the failing tests, i.e. create the lists of FAILING_{C,CPP}_TESTS
as in https://github.com/vadz/swig/blob/C/Examples/test-suite/c/Makefile.in#L30,
and I don't really volunteer for this, but it shouldn't be that difficult.
In the absolutely worst case we could just copy {C,CPP}_TEST_CASES to these
variables although if we can't find any passing tests for some language
this does raise the question of whether it's useful to keep it at all.

We could even mostly automate things by introducing xfail tests machinery,
which would be useful anyhow (and I might volunteer for this...) and
starting with FAILING_X_TESTS==X_TEST_CASES and then removing the tests
that unexpectedly passed from it.

WSF> The idea is to have the sub-standard target languages where we drop
WSF> our high quality standards for accepting patches because we do not
WSF> guarantee any kind of backwards compatibility.

BTW, I agree that backwards compatibility requirements could be relaxed
for the second-tier languages. But I'd still really, really like to keep
the absence of regressions and the documentation requirements.



I honestly think that using the same requirements for all contributions to
the project would serve this goal better than having different standards. I
would drop all the modules that don't have any functional test suite at
all, make test suite pass for all the other ones by excluding all currently
failing tests from it, and keep requiring that it passes.


I too would like to keep the same high standards for accepting patches for
the sub-standard modules. However, I see practical problems. While the idea
of adding all the failing test cases for these modules to a list of known
failing test cases sounds fine, it will become unmanageable for normal
development. The current approach is to write a test case and test it with
a couple of languages... usually Java and Python, as these cover the two
main internal differences in implementation. The test case is then added
into common.mk... nice and simple, and this nearly always works for all the
other languages. With the experimental languages the chances are the test
case will fail, and so loads of makefiles need modifying with the failed
test case. Plus these languages are not properly set up in the test-suite
to test easily. Most patch requests will show as failed for the same
reason, and we have enough problems already with false failures. So I don't
want to be maintaining lots of lists of failed test cases. I'd rather the
experimental languages were legitimately allowed to fail. Hence they need
to be treated differently. Unless there are other ideas to solve this?
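To make the maintenance burden concrete: under a per-language xfail scheme,
every new test case added to common.mk would also need appending to the
known-failures list of each experimental backend. A rough sketch, where the
file layout, list file name and variable name are hypothetical, not the real
test-suite structure:

```shell
#!/bin/sh
# Hypothetical sketch: one new test case means touching one xfail list
# per experimental language. Paths and variable names are illustrative.

root=$(mktemp -d)
new_test="cpp11_some_feature"
experimental="mzscheme ocaml chicken cffi"

for lang in $experimental; do
    dir="$root/Examples/test-suite/$lang"
    mkdir -p "$dir"
    echo "FAILING_CPP_TESTS += $new_test" >> "$dir/failing.mk"
done

# One makefile fragment modified per experimental language,
# for a single new test case.
ls "$root/Examples/test-suite"
```

Four files touched for one test here; with more experimental backends the
bookkeeping grows accordingly.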

William
Vadim Zeitlin
2017-02-21 17:49:29 UTC
Permalink
On Tue, 21 Feb 2017 08:20:30 +0000 William S Fulton <***@fultondesigns.co.uk> wrote:

WSF> It isn't that black and white, and the modules that aren't in master have
WSF> been overlooked.

Yes, sorry, I didn't include them, but if we could agree on the policy for
the languages currently present in master, it would also apply to any new
additions.

WSF> the main purpose of my original email to fundamentally address the modules
WSF> that are not in quite a good enough state but are potentially useful.

This is because I thought there would be no modules in this state, i.e.
"potentially useful" but without any test suite with at least some tests
passing. I'm still not sure that modules without tests can really be
useful; IMO they will inevitably bitrot anyhow. But I can't commit to
creating a test suite for half a dozen languages, so I'll just have to shut
up.

WSF> I think we can debate which languages are experimental or to be deleted for
WSF> SWIG 4, but am keen to establish the concept of distinguishing
WSF> experimental/sub-standard modules from those that fully pass the
WSF> test-suite. The important question is: What reasons, if any, are there for
WSF> not having experimental languages?

My objections are:

- Fully supported/experimental statuses are insufficient: this is my main
concern, as the C module, the one I'm most interested in in this
discussion, doesn't fall into either category. It won't cover 100% of
SWIG functionality in the foreseeable future, but I do want to run its
tests on Travis CI to prevent it from being broken. So we will need at
least 3 different categories of modules instead of the single one I'd
prefer.

- Concerns about SWIG quality: if someone sees that SWIG ships an Objective C
module and then realizes that no code produced by it compiles, it's not
going to be a good experience, even if the module is clearly marked as
experimental. I feel the test suite is required to provide at least
some basic degree of confidence in the module's suitability.

- Less clarity for contributors: we'll have to tell people that changes to
this and that language must include test updates, but changes to some
others don't need them.

- Less motivation for "experimental" module developers to implement the
minimal required functionality for running at least part of SWIG test
suite.


WSF> I too would like to keep the same high standards for accepting patches for
WSF> the sub-standard modules. However I see practical problems. While the idea
WSF> of adding all the failing test cases for these modules into a list of known
WSF> failed test cases, this will become unmanageable for normal development.
WSF> The current approach is to write a test case and test it with a couple of
WSF> languages... Usually Java and Python as these cover the two main internal
WSF> differences in implementation. The test case is then added into
WSF> common.mk... Nice and simple and this nearly always works for all the other
WSF> languages. With the experimental languages the chances are the test case
WSF> will fail and so loads of makefiles need modifying with the failed test
WSF> case.

This is a valid concern and I admit that I don't really know what to do
about it. Maybe we could have some symbols indicating whether the language
supports some relatively broad categories of functionality, e.g.
LANG_HAS_DIRECTORS or LANG_HAS_OVERLOADING, and then use those to
exclude a new test, in one place, from all the modules that lack support
for the feature it exercises?
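In common.mk terms, the capability-symbol idea could look something like the
sketch below. This is purely illustrative: the LANG_HAS_* variables, the
SKIP_TESTS mechanism, and the test names are hypothetical, not existing SWIG
build machinery.

```make
# Hypothetical sketch only: each language's makefile would declare its
# capabilities (these variables do not exist in SWIG's build system today).
LANG_HAS_DIRECTORS = 0
LANG_HAS_OVERLOADING = 1

# common.mk would tag each test case with the feature it needs, once,
# centrally (test names here are illustrative):
DIRECTOR_TESTS = director_basic director_exception
OVERLOAD_TESTS = overload_simple overload_template

# Tests for unsupported features are excluded for this language in one place,
# instead of editing every language's makefile when a new test is added.
ifneq ($(LANG_HAS_DIRECTORS),1)
  SKIP_TESTS += $(DIRECTOR_TESTS)
endif
ifneq ($(LANG_HAS_OVERLOADING),1)
  SKIP_TESTS += $(OVERLOAD_TESTS)
endif

CPP_TEST_CASES := $(filter-out $(SKIP_TESTS),$(CPP_TEST_CASES))
```

A new test would then only need tagging with the feature it exercises, and
every module lacking that feature would skip it automatically.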

Otherwise I can just see checking the PR status on Travis and manually
adding the test to the XFAIL_TESTS list for all of the modules for which it
fails. This is indeed annoying, but I don't think it's an insurmountable
obstacle and, again, I don't think this annoyance outweighs the benefit of
running the test suite for e.g. the C module.

WSF> Plus these languages are not properly set up in the test suite to
WSF> test easily.

This really needs to be fixed, i.e. IMO this must be a prerequisite for
accepting a module into SWIG: not only must the test suite exist, but CI
must run it. Having the tests but not running them is almost worse than not
having them in the first place, as it just creates a false feeling of
confidence.

WSF> Most patch requests will show as failed for the same reason
WSF> and we have enough problems already with false failures.

Travis is indeed flaky and this is very annoying. I don't have any
solution to this; I use Buildbot for some other projects and it's, in
general, more stable, but much less user-friendly for GitHub projects.

WSF> So I don't want to be maintaining lots of lists of failed test cases.

Maybe we could at least try it and see if it's really going to be such a
problem in practice? I don't think new tests are added that often... And
adding a build to the "allow_failures" part of the Travis config file can
be easily done at any moment.
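For reference, marking a language's build as allowed to fail is a small
Travis config change; a sketch (the job matrix entries and SWIG_LANG
variable are illustrative, not SWIG's actual .travis.yml contents):

```yaml
# .travis.yml (sketch): experimental language builds still run and report
# their results, but a failure does not turn the whole PR status red.
matrix:
  include:
    - env: SWIG_LANG=java    # standard: must pass
    - env: SWIG_LANG=python  # standard: must pass
    - env: SWIG_LANG=ocaml   # experimental
  allow_failures:
    - env: SWIG_LANG=ocaml   # matched by env; failure does not fail the build
```

Moving a language in or out of allow_failures is then a one-line change.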

WSF> I'd rather the experimental languages were legitimately allowed to
WSF> fail. Hence they need to be treated differently. Unless there are
WSF> other ideas to solve this?

If we do it like this, I'd like to ask for an intermediate
status/tier for languages such as C. But this is getting too complicated
and fragile; I'd still prefer to have as many languages as possible in SWIG
itself (and run the test suite on Travis for them) and drop/branch/fork all
the others.

Regards,
VZ
m***@comcast.net
2017-02-21 21:51:07 UTC
Permalink
[...]
Post by Vadim Zeitlin
Maybe we could have some symbols
indicating whether the language supports some relatively broad
categories of functionalities, e.g. LANG_HAS_DIRECTORS or
LANG_HAS_OVERLOADING etc and then use those to exclude a new
test from all modules that miss support for the thing being
tested in it at once?
Otherwise I can just see checking the PR status on Travis and
manually adding the test to the XFAIL_TESTS list ...
I like this notion of categories for the test suite. And I wonder
if it could be used to implement the idea of swig modules being
[experimental, supported, sub-standard, ..]. Do all languages
supported by swig need to implement all of swig's functionality?
I'd argue that a module could still be useful to someone without
having support for some LANG_HAS_XXX feature.

It seems to me that it might be possible to put all of the swig
tests into one of these functionality categories. Then a language
either fully implements *all* of the tests in that functionality
category or it does not. Thus the "rules" for a module would not have
to be tweaked with long lists of xfail exceptions and could be
consistently enforced. A module either passes all functionality tests
or it does not. If it does not then it does not implement that
functionality.

The functionality categories could then be grouped into the "tiers"
of support William started this discussion with. A fully supported
module implements all functionality categories. And a
"basic/experimental" module would have a very short list of
functionality categories that it must pass. One could still have a
strict policy for experimental modules passing all of the tests,
but the list of functionality categories would be smaller than for a
fully supported module.

In years past, I implemented a new swig module for a language swig
did not support. I never even considered contributing this module to
swig, as the thought of implementing support for all of the tests was
staggering for a new module developer. My module did support some
basic wrapping features and was useful to me. But I did not have the
time or experience with swig to flesh my module out enough to
consider contributing it to swig. If there were a short list of
functionalities and tests that a new/experimental module would be
expected to pass then things might have been different.

Mike
William S Fulton
2017-03-23 19:19:19 UTC
Permalink
I'm really strongly opposed to having multiple different classifications.
The number of classifications could easily be greater than the number of
sub-standard languages that we are trying to classify, so I don't see the
point of trying to agree on some overly complex set of classifications.
Documentation should instead clarify what is working and what is not.

This thread has shown that there isn't going to be agreement around
everyone's goals and aspirations so after much deliberation I'm going to go
ahead with the best compromise I can think of. The overriding consideration
is to keep things simple, easy for core developers to maintain, flexible for
future development, and to encourage new developers to get involved by
centralizing as much code as possible into master, without lowering
standards for the current set of mature languages.

We'll have three classifications of target languages, but you'll notice
that the Experimental classification can provide additional guarantees at
the discretion of the maintainer of the language module. The idea is if
someone wants an experimental language to have stronger guarantees, they
will have to roll up their sleeves, contribute and become a maintainer or
assist the existing maintainer. That is, a very purposeful statement to
'put up or shut up'.

1. Standard
- The entire test-suite must pass.
- Examples must be available and also run successfully.
- Fixing regressions will be given highest priority by core SWIG developers.
- Core developers will make sure that all tests are working on each release.
- Nearly all features will work with some exclusions. Directors, full
native nested class support and a few lesser known STL class wrappers are
the ones that come to mind.
- Stability and backwards compatibility will be provided for point releases
(eg 4.0.x).

2. Experimental
- For target languages of sub-standard quality, failing to meet the above
'Standard' classification.
- The test-suite must be implemented and include some runtime tests for
the wrapped C and C++ test cases.
- Failing tests must be put into one of the FAILING_CPP_TESTS or
FAILING_C_TESTS lists in the test-suite. This will ensure the test-suite
can be superficially made to pass by ignoring failing tests.
- The test-suite will be run on Travis, but experimental languages will be
set as 'allow_failures'. This means that pull requests and normal
development commits will not break the test-suite on Travis for
experimental languages.
- Any new failed tests will be fixed on a 'best effort' basis by core
developers, but may result in the test-suite for the language failing.
- If a module has an official maintainer, then the maintainer will be
requested to focus on fixing test-suite regressions and commit to migrating
the module to become a 'Standard' module.
- If a module does not have an official maintainer, then, as maintenance
will be on a 'best efforts' basis by the core maintainers, no guarantees
will be provided from one release to the next and regressions may well
creep in.
- Experimental target languages must include an additional option:
-swigexperimental.
- Experimental target languages will have a (suppressible) warning
explaining the Experimental sub-standard status and encouraging users to
help improve it (wording to be agreed upon).
- No backwards compatibility is guaranteed as the module is effectively 'in
development'. If a language module has an official maintainer, then a
backwards compatibility guarantee may be provided at the maintainer's
discretion and should be documented as such.
- I'm undecided on the following and it depends on the current status as to
whether I enforce it: Experimental languages should pass 'make
partialcheck-test-suite' without error. This basically provides a
guarantee that the test-suite will not cause SWIG to crash or error out. It
also makes it possible to refactor code easily while including experimental
languages, as a directory diff can simply be run before and after
refactoring to ensure there is no change in output.

3. Disabled
Any language currently in master without a working test-suite will be
disabled.
To be decided, but I will probably just leave these modules compiling, with
nothing from them installed. An error message describing the module
as deleted will appear on any attempt to use it. Anyone wishing to
resurrect it can probably do so with some minor code changes. We could
delete these language modules completely in version 4.1 if there is no
feedback pointing out that they are no longer available.
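The FAILING_C_TESTS/FAILING_CPP_TESTS mechanism described for experimental
languages above could be as simple as a per-language exclusion list. A
sketch follows; the test names are illustrative and the exact variable
handling in SWIG's makefiles may differ.

```make
# Hypothetical sketch of an experimental language's test-suite makefile.
# Known-bad tests are listed explicitly and filtered out, so the remaining
# suite must pass cleanly and any new regression in it is still caught.
FAILING_C_TESTS = \
    enums \
    preproc_constants_c

FAILING_CPP_TESTS = \
    director_basic \
    smart_pointer_templates

C_TEST_CASES := $(filter-out $(FAILING_C_TESTS),$(C_TEST_CASES))
CPP_TEST_CASES := $(filter-out $(FAILING_CPP_TESTS),$(CPP_TEST_CASES))
```

Shrinking the FAILING lists then becomes a concrete, visible measure of a
module's progress towards 'Standard' status.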

I've also got a hidden agenda that I have not mentioned to date. The reason
I'd like to include as many languages as possible in master is to make
it possible to refactor the core at some point. I'd like to slowly remove
DOH and replace it with something better. I don't believe this will work by
having modules scattered about in different forks/branches.

I've made the modifications to Ocaml for it to work in the way I envisage
an experimental language should work on Travis. It should go green soon.

Lastly, I'm going to tentatively abandon a swig-3.0.13 release as there are
no regressions reported to date and no-one is willing to work on a 4.0
branch. I suggest we only make a 3.0.13 release with just bug fixes if it
is needed before 4.0 is ready. If there are no major objections, master is
now open for 4.0 development.

Finally, whenever 4.0.0 is released, I'd like to provide bug fixes only in
4.0.x releases and switch primary development to 4.1. In other words, let's
move away from developing never-ending point releases.

William
Vadim Zeitlin
2017-03-24 01:36:23 UTC
Permalink
On Thu, 23 Mar 2017 19:19:19 +0000 William S Fulton <***@fultondesigns.co.uk> wrote:

WSF> We'll have three classifications of target languages, but you'll notice
WSF> that the Experimental classification can provide additional guarantees at
WSF> the discretion of the maintainer of the language module. The idea is if
WSF> someone wants an experimental language to have stronger guarantees, they
WSF> will have to roll up their sleeves, contribute and become a maintainer or
WSF> assist the existing maintainer. That is, a very purposeful statement to
WSF> 'put up or shut up'.

FWIW I'll definitely put up with this, although I still regret a few
choices for the experimental languages, such as...

WSF> 1. Standard
WSF> - The entire test-suite must pass.
WSF> - Examples must be available and also run successfully.
WSF> - Fixing regressions will be given highest priority by core SWIG developers.
WSF> - Core developers will make sure that all tests are working on each release.
WSF> - Nearly all features will work with some exclusions. Directors, full
WSF> native nested class support and a few lesser known STL class wrappers are
WSF> the ones that come to mind.
WSF> - Stability and backwards compatibility will be provided for point release
WSF> (eg 4.0.x).
WSF>
WSF> 2. Experimental
WSF> - For target languages of sub-standard quality, failing to meet the above
WSF> 'Standard' classification.
WSF> - The test-suite must be implemented and include some runtime tests for
WSF> wrapped C and C++ tests.
WSF> - Failing tests must be put into one of the FAILING_CPP_TESTS or
WSF> FAILING_C_TESTS lists in the test-suite. This will ensure the test-suite
WSF> can be superficially made to pass by ignoring failing tests.
WSF> - The test-suite will be run on Travis, but experimental languages will be
WSF> set as 'allow_failures'. This means that pull requests and normal
WSF> development commits will not break the test-suite on Travis for
WSF> experimental languages.

This one: with the FAILING_TESTS lists, the test suite really should pass
all the time. For me, "allow_failures" is a hack for temporary failures; it
shouldn't be used to allow the test suite to fail on a permanent basis.

WSF> - Any new failed tests will be fixed on a 'best effort' basis by core
WSF> developers, but may result in the test-suite for the language failing.

Maybe I'm too optimistic, but I hope that people submitting PRs that make
one of the experimental languages fail could also update FAILING_TESTS to
include the new test which causes the failure.

WSF> - If a module has an official maintainer, then the maintainer will be
WSF> requested to focus on fixing test-suite regressions and commit to migrating
WSF> the module to become a 'Standard' module.
WSF> - If a module does not have an official maintainer, then, as maintenance
WSF> will be on a 'best efforts' basis by the core maintainers, no guarantees
WSF> will be provided from one release to the next and regressions may well
WSF> creep in.
WSF> - Experimental target languages must include an additional option:
WSF> -swigexperimental.
WSF> - Experimental target languages will have a (suppressible) warning
WSF> explaining the Experimental sub-standard status and encourage users to help
WSF> improve it (wording to be agreed upon).

Both of these points still seem entirely unnecessary and potentially
harmful to me. IMNSHO it's quite enough to mention the language's
experimental status in the manual. BTW, it might be a good idea to
group all experimental languages in their own part of the manual, i.e.
instead of a flat manual with 40+ chapters, have a first part which is
mostly language-independent, i.e. applies to all languages, a second part
covering all first-tier languages, and a third with the experimental
ones.

WSF> - No backwards compatibility is guaranteed as the module is effectively 'in
WSF> development'. If a language module has an official maintainer, then a
WSF> backwards compatibility guarantee may be provided at the maintainer's
WSF> discretion and should be documented as such.

Fair enough.

WSF> - I'm undecided on the following and it depends on the current status as to
WSF> whether I enforce it: Experimental languages should pass 'make
WSF> partialcheck-test-suite' without error. This is basically providing a
WSF> guarantee that the test-suite will not cause SWIG to crash or error out. It
WSF> also makes it possible to refactor code easily and include experimental
WSF> languages as a directory diff can be simply run before and after
WSF> refactoring ensuring there is no change in output.

Looks good too.


WSF> 3. Disabled
WSF> Any language currently in master without a working test-suite will be
WSF> disabled.

Fine.

WSF> Lastly, I'm going to tentatively abandon a swig-3.0.13 release as there are
WSF> no regressions reported to date and no-one is willing to work on a 4.0
WSF> branch. I suggest we only make a 3.0.13 release with just bug fixes if it
WSF> is needed before 4.0 is ready. If there are no major objections, master is
WSF> now open for 4.0 development.

Thanks, unfortunately I won't be able to start working on SWIG again
before at least another couple of weeks due to many other urgent things on
my plate right now. But I do still intend to do what I can, notably advance
the C backend.

WSF> Finally, whenever 4.0.0 is released, I'd like to provide bug fixes only in
WSF> 4.0.x releases and switch primary development to 4.1. In other words, let's
WSF> move away from developing never-ending point releases.

I like this too, but someone (meaning, probably you) would have to
backport bug fixes on master to the previous release branch for this to
work well for SWIG users, right?


To summarize, in spite of some disagreements, I do think this will change
things for the better, thanks a lot for thinking about and working on all
this stuff!
VZ
Kris Thielemans
2017-03-24 07:02:15 UTC
Permalink
Dear all

Sorry for doing a top-reply.

Just like to mention that I agree with Vadim's comments and suggestions,
i.e. have three sections in the manual, and don't use the "-swigexperimental"
flag (I don't see it being useful for actual users, and it will actually
break compatibility with (C)Makefiles at some point when the language moves
to "standard").

One question: what do you mean with "disabled"? Removed from the source? (I
guess/hope not). Not built by default but still possible to build using the
standard system ? (I hope so)

Thanks William et al for a huge effort on Swig and for getting this
discussion started.

Kris

PS: Sadly I have no time anymore to work on Swig myself. Maybe I'll find
some time to submit a PR for CMake (the one I made ages ago was essentially
functional).


Olly Betts
2017-03-30 00:03:38 UTC
Permalink
Post by William S Fulton
-swigexperimental.
I'm afraid I've been swamped lately and haven't had time to fully review
this discussion, and the issues raised are somewhat complex so I didn't
want to fire off a reply before I had.

However, this option seems like it's going to cause real practical
problems when a SWIG backend moves to or from experimental status, and
will only really serve to annoy users.

There seem to be two cases - either -swigexperimental is always
supported even for non-experimental languages, or you can only
pass it for an experimental language. It's unclear to me which
you are proposing, so I'll consider both.

In the first case, the best advice for users would be to always pass
-swigexperimental - then their Java bindings build system will continue
to work when SWIG 16.12.104 moves Java to experimental status. This
seems to defeat the whole motivation for the option.

The second case means that user build systems will need to behave
differently depending on the version of SWIG that the person building
the software has installed. SWIG's support for Java in 16.12.104 is
unlikely to be much different to the support in 16.12.103, it's just
an admission that the backend is in need of love. Yet the user needs
to add a version check on SWIG and hard-code the knowledge that
16.12.104 was the SWIG version that changed, or else probe for whether
-swigexperimental is required.

Either way, we're really just creating pointless work for users, to
automate away the annoyance that -swigexperimental creates. It
won't achieve your aim of flagging the situation any better than a
simple warning message would. And it'll create a natural resistance
amongst SWIG devs to moving languages to experimental status until they
are basically unusable for anything (which is really too late).

These issues will be annoying for distros with binary packages as it
can cause a simple rebuild of a reverse dependency of SWIG to fail. I
suspect we would see distros patching their SWIG packages to make the
experimental check a warning and -swigexperimental a no-op.

Let's not go there - a clear warning message about experimental status
is sufficient.

Cheers,
Olly
William S Fulton
2017-04-24 18:40:29 UTC
Permalink
Post by Kris Thielemans
Post by William S Fulton
-swigexperimental.
Okay, reading all the opposition to this, I'll go with just the warning
message and drop the -swigexperimental flag.

I've bumped the version on master to 4.0 now and moved the 3.1 wiki page to
https://github.com/swig/swig/wiki/SWIG-4.0-Development.
Post by Kris Thielemans
One question: what do you mean with "disabled"? Removed from the source?
(I guess/hope not). Not built by default but still possible to build using
the standard system? (I hope so)
For the 'Disabled' languages, I'll leave them in the source tree for 4.0.
I'll probably leave them compiling, but they will not be available at
runtime. If no-one is prepared to bring them up to at least 'Experimental'
status, I'll delete the source in 4.1.
Post by Kris Thielemans
PS: Sadly I have no time anymore to work on Swig myself. Maybe I'll find
some time to submit a PR for CMake (the form I make ages ago was essentially
functional).
Shame, from what I recall the CMake patches didn't need much more work to
not be an additional maintenance burden.

William

Rob McDonald
2017-02-15 17:33:36 UTC
Permalink
SWIG-User here (not developer) --

Thanks again to the whole development team for producing such a useful tool.

I understand the desire to clearly identify two classes of language
support and I've read through the chain with Vadim.

With some fear of bikeshedding, I would suggest the categories
'supported' and 'experimental'. I think those terms clearly get the
point across without being overtly hostile.


<tangent>
I often describe my own open source project with a Venn diagram: one
set for users, another set for potential developers. I have observed
that cultivating User-Developers is the key to getting help on the
project. Unfortunately, in my case, the overlap of the two sets
is exceptionally small.

You can imagine some projects (a C compiler written in C) where the
overlap is nearly complete. Such a project still must compel users to
contribute -- but at least they have the fundamental skills.

SWIG lies in an intermediate realm. All its users are programmers,
but they may not all be up for the rigors of the programming inside
SWIG itself.

This situation may create an environment that encourages partial
language support. Someone versed in language X comes along, finds
SWIG, and attempts an implementation. They run out of time, skill, or
need and cease development.
</tangent>


Often the social challenges are tougher than the technical ones. Best
of luck -- and thanks again,

Rob

On Fri, Feb 10, 2017 at 10:49 AM, William S Fulton
_______________________________________________
Swig-devel mailing list
https://lists.sourceforge.net/lists/listinfo/swig-devel
Joel Andersson
2017-02-16 15:05:46 UTC
Permalink
Hi all,

I think that this is a good way forward. By allowing a module to have
"experimental" status in SWIG, you have implicitly endorsed that particular
effort and encouraged others to contribute to it. There can be multiple
efforts to implement a particular language module (this has been the case
for MATLAB), but until one is in master, there is no obviously "preferred"
one.

The alternative is that these language modules never merge with master.
From my side, I would very much like to see the MATLAB module (actually
MATLAB/Octave bilingual module) merged with master, be it marked
"experimental" or "beta" or whatever.

Joel
--
Joel Andersson, PhD
1415 Engineering Drive
Engineering Hall, room 2009
Madison, WI 53706, USA
Phone: +1 608 421 4553 (Swedish number: +46 70 736 05 12)
Private address: 122 E Gilman St, Apt 408, Madison, WI 53703, USA