pith. machine review for the scientific record.

arxiv: 2605.09518 · v1 · submitted 2026-05-10 · 💻 cs.LG

Recognition: 2 theorem links · Lean Theorem

LLM-Driven Performance-Space Augmentation for Meta-Learning-Based Algorithm Selection

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 03:35 UTC · model grok-4.3

classification 💻 cs.LG
keywords meta-learning · algorithm selection · LLM augmentation · synthetic datasets · performance space · landmarkers · UCI regression

The pith

LLM-generated synthetic datasets distributed uniformly across a low-dimensional performance space improve meta-learning for algorithm selection.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Meta-learning for algorithm selection depends on a meta-dataset that pairs dataset descriptions with labels for which algorithm performs best, but the supply of real curated datasets is small and leaves the meta-learner under-trained. The paper generates hundreds of synthetic regression datasets with a large language model and positions each one in a two-dimensional space whose axes are the cross-validated R-squared scores of two simple anchor algorithms. When these synthetic points are spread evenly rather than clustered near decision boundaries, the resulting meta-learner shows clear gains on real UCI tasks under both regression and multi-label evaluation. The authors interpret the gains as evidence that algorithm performance lies on a low-dimensional manifold whose coverage can be completed more effectively by uniform placement than by targeted sampling.
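The positioning step above can be sketched concretely: each dataset is mapped to a point whose coordinates are the cross-validated R-squared scores of two simple anchor algorithms. A minimal illustration, assuming scikit-learn and two hypothetical landmarker choices (linear regression and a decision stump; the paper's exact anchor pair may differ):

```python
# Hypothetical sketch: place one dataset in the 2D performance space.
# The landmarkers here (linear model, depth-1 tree) are illustrative choices.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

def performance_coordinates(X, y, cv=5):
    """Return (R2 of landmarker 1, R2 of landmarker 2) for one dataset."""
    lm1 = LinearRegression()
    lm2 = DecisionTreeRegressor(max_depth=1, random_state=0)  # decision stump
    r2_1 = cross_val_score(lm1, X, y, cv=cv, scoring="r2").mean()
    r2_2 = cross_val_score(lm2, X, y, cv=cv, scoring="r2").mean()
    return r2_1, r2_2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
coords = performance_coordinates(X, y)
# a near-linear target lands near R2 = 1 on the linear-landmarker axis
```

Each real or synthetic dataset becomes one such point, and the augmentation strategies differ only in where in this square the synthetic points are asked to land.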

Core claim

Steering an LLM to produce synthetic regression datasets that maximise uniform coverage of the two-dimensional performance space spanned by the R-squared scores of two landmarkers reduces reconstruction bias in the meta-dataset. Relative to margin-based placement, uniform placement yields a 17.47 percent reduction in Hamming loss, a 100.41 percent gain in subset accuracy, and a 6.09 percent improvement in pooled out-of-fold R-squared, while both strategies improve on the unaugmented baseline.

What carries the argument

The two-dimensional performance space whose coordinates are the cross-validated R-squared values of two fixed anchor algorithms (landmarkers), used to guide LLM generation of synthetic datasets for uniform epsilon-cover augmentation of the meta-dataset.
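The epsilon-cover framing can be made tangible: epsilon is the smallest radius such that every point of the performance square lies within epsilon of some synthetic dataset's coordinates, and uniform placement should shrink it relative to boundary-concentrated placement. A numpy-only sketch under that reading (all names hypothetical; the paper's cover construction may differ):

```python
# Illustrative epsilon-cover comparison in the unit performance square.
import numpy as np

def cover_radius(points, grid_res=50):
    """Max distance from any grid point in [0,1]^2 to its nearest sample."""
    g = np.linspace(0, 1, grid_res)
    grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=-1)
    return d.min(axis=1).max()

rng = np.random.default_rng(0)
uniform = rng.uniform(0, 1, size=(200, 2))
# margin-style placement: clustered near the diagonal where the landmarkers tie
margin = np.clip(np.repeat(rng.uniform(0, 1, 200)[:, None], 2, 1)
                 + rng.normal(0, 0.03, (200, 2)), 0, 1)
eps_uniform = cover_radius(uniform)
eps_margin = cover_radius(margin)
# uniform placement covers the square with a much smaller epsilon
```

Under this toy model the margin-based cloud leaves the off-diagonal corners uncovered, which is the geometric intuition behind the paper's uniform-coverage result.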

If this is right

  • Uniform placement of synthetic datasets across the performance space reduces Hamming loss by 17.47 percent relative to margin-based placement in the multi-label formulation.
  • The same uniform strategy yields a 100.41 percent relative increase in subset accuracy over margin-based placement.
  • Pooled out-of-fold R-squared rises by 6.09 percent relative to margin-based placement.
  • Margin-based placement near decision boundaries improves results over the unaugmented baseline yet remains inferior to uniform placement.
  • Algorithm performance can be treated as residing on a low-dimensional manifold whose reconstruction bias a uniform epsilon-cover minimises.
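The two multi-label metrics behind the first two bullets have standard definitions, which a short sketch makes precise (the label matrices below are illustrative, not the paper's data):

```python
# Hamming loss: fraction of individual label bits predicted wrongly.
# Subset accuracy: fraction of instances whose full label vector is exact.
import numpy as np

def hamming_loss(Y_true, Y_pred):
    return np.mean(Y_true != Y_pred)

def subset_accuracy(Y_true, Y_pred):
    return np.mean((Y_true == Y_pred).all(axis=1))

Y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
Y_pred = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 0]])
hl = hamming_loss(Y_true, Y_pred)     # 2 wrong bits of 12 -> 1/6
sa = subset_accuracy(Y_true, Y_pred)  # 2 of 4 rows exact -> 0.5
```

Subset accuracy is the stricter metric, which is why large relative gains on it (such as the reported 100.41 percent) are plausible from a low baseline.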

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • If the low-dimensional manifold structure is general, the number of real datasets required for effective meta-learning could be reduced by careful synthetic filling.
  • The same steering method could be tested on classification tasks once suitable landmarkers are chosen for that setting.
  • Increasing the number of landmarkers beyond two might be examined to see whether coverage improves without raising the dimensionality of the space.

Load-bearing premise

The LLM can be directed to create synthetic datasets whose performance on the two landmarkers matches the statistical distribution of real-world datasets without introducing systematic biases or artifacts.

What would settle it

The claim would fail if augmenting the meta-dataset with the LLM synthetics and then measuring meta-learner performance on a fresh collection of real datasets yielded no improvement over the unaugmented baseline, or if the landmarker scores of the synthetics failed to overlap the spread seen in real data.

Figures

Figures reproduced from arXiv: 2605.09518 by Daren Ler, Darren Zhu.

Figure 1. Distribution of per-instance selection frequencies across … [figures/full_fig_p011_1.png]
Figure 2. Convergence of running selection frequencies over … [figures/full_fig_p012_2.png]
Figure 3. Binned mean selection frequency as a function of perpendicular distance to the … [figures/full_fig_p013_3.png]
Figure 4. Normalised distribution of perpendicular distance to the … [figures/full_fig_p013_4.png]
Figure 5. Density-normalised histograms of the 12 meta-features for the uniform (blue) and margin-based (orange) … [figures/full_fig_p015_5.png]
Figure 6. Uniform (blue) and margin-based (orange) canonical instance sets in … [figures/full_fig_p016_6.png]
Figure 7. Multi-label classification learning curves as a function of the number of synthetic samples per training fold. [figures/full_fig_p018_7.png]
Figure 8. Regression learning curve for pooled OOF … [figures/full_fig_p019_8.png]
Figure 9. H2 paired t-test p-values for Hamming loss as a function of the number of synthetic samples per training fold. The dashed line at α = 0.05 marks the significance threshold; the sign of the mean difference (uniform − margin) indicates the direction of the effect.
Figure 10. H2 paired t-test p-values for subset accuracy as a function of the number of synthetic samples per training fold. The dashed line at α = 0.05 marks the significance threshold; the sign of the mean difference (uniform − margin) indicates the direction of the effect. [figures/full_fig_p020_10.png]
Figure 11. H2 paired t-test p-values for pooled OOF R2 as a function of the number of synthetic samples per training fold. The dashed line at α = 0.05 marks the significance threshold; the sign of the mean difference (uniform − margin) indicates the direction of the effect.
Figure 12. Histograms of selected meta-features computed on the 42 real UCI regression datasets and on the 484 … [figures/full_fig_p037_12.png]
Original abstract

Meta-learning for algorithm selection relies on a meta-dataset in which each row corresponds to a supervised learning dataset described by meta-features and labelled with a target value that is associated with algorithm choice (typically, some function of algorithm performance). A persistent limitation is that the number of curated real-world datasets is small, resulting in sparse meta-datasets that constrain meta-learner generalisation. In this paper, we address this problem by augmenting the meta-dataset with synthetic regression datasets produced via a large language model (LLM), with generation steered toward target regions of a low-dimensionality performance space. In our experiments, we adopt a two-dimensional geometric setting defined by the cross-validated $R^2$ scores of two anchor algorithms, known as landmarkers. We compare two augmentation strategies: (1) uniform sampling, which distributes synthetic datasets across the performance space; and (2) margin-based sampling, which concentrates them near the decision boundary where landmarker preference is most ambiguous. Across 42 real-world UCI regression datasets and 730 synthetic datasets, both strategies substantially improve meta-learner performance over the unaugmented baseline under regression and multi-label evaluation formulations. However, uniform augmentation consistently outperforms margin-based augmentation, achieving a 17.47% relative reduction in Hamming loss, a 100.41% relative improvement in subset accuracy, and a +6.09% relative gain in pooled out-of-fold $R^2$. These results lead us to postulate a central thesis: the performance of algorithms resides on a low-dimensional performance manifold, whose reconstruction bias may be minimised by user-guided LLMs that seek to maximise uniform $\epsilon$-cover, and consequently, lead to improved meta-learning for algorithm selection.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated authors' rebuttal, circularity audit, and an axiom and free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper claims that augmenting sparse meta-datasets for algorithm selection with 730 LLM-generated synthetic regression datasets, steered via two landmarkers' cross-validated R² scores into a 2D performance space, improves meta-learner performance over an unaugmented baseline on 42 real UCI datasets. Uniform sampling across the space outperforms margin-based sampling near decision boundaries, yielding 17.47% relative Hamming loss reduction, 100.41% subset accuracy gain, and +6.09% pooled out-of-fold R²; this leads to a post-hoc thesis that algorithm performances lie on a low-dimensional manifold whose reconstruction bias is minimized by uniform ε-cover via user-guided LLMs.

Significance. If the empirical gains hold after validation, the work offers a practical advance in meta-learning for algorithm selection by mitigating data scarcity through targeted LLM augmentation in performance space rather than feature space. Credit is due for the scale of evaluation (42 real + 730 synthetic datasets), the direct comparison of uniform vs. margin strategies, and the reproducible framing of the augmentation as conditioning on landmarker R² targets. The approach could generalize to other meta-learning tasks if the synthetic data realism is confirmed.

major comments (3)
  1. [Experimental evaluation and results] The central empirical claim of improved meta-learner performance rests on synthetic datasets whose generation is conditioned only on the two landmarkers' R² values; no check is reported that the joint performance vectors (or higher-order correlations) across the remaining algorithms match the real-data distribution. This is load-bearing because the meta-labels and meta-learner training use the full algorithm performance matrix, so any LLM-induced distortion in non-landmarker performances could produce the observed gains in R², Hamming loss, and subset accuracy as an artifact rather than evidence of manifold reconstruction.
  2. [Results (quantitative comparisons)] The reported relative improvements (17.47% Hamming loss reduction, 100.41% subset accuracy improvement, +6.09% pooled R²) are given without accompanying statistical tests, confidence intervals, or details on the number of meta-learner runs and variance estimation. This weakens the ability to conclude that uniform augmentation is reliably superior and that the gains exceed what would be expected from random augmentation or label-distribution shifts.
  3. [Conclusion and discussion] The low-dimensional performance manifold thesis is presented as a concluding postulate without any direct test (e.g., intrinsic dimensionality estimation, manifold reconstruction error on held-out real data, or comparison against alternative explanations such as simple density filling). Because the thesis is used to interpret why uniform sampling succeeds, its lack of supporting analysis makes the explanatory claim speculative rather than demonstrated.
minor comments (2)
  1. [Methodology] The manuscript should provide the exact LLM prompts, temperature settings, and any post-generation filtering used to steer datasets toward target (R²_landmarker1, R²_landmarker2) regions, as these details are essential for reproducibility.
  2. [Meta-learning formulation] The list of meta-features employed by the meta-learner is not enumerated; adding this (or a reference to a standard set) would clarify the experimental setup.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for their insightful comments, which have helped us identify areas for improvement in our manuscript. We address each major comment below and outline the revisions we will make.

Point-by-point responses
  1. Referee: [Experimental evaluation and results] The central empirical claim of improved meta-learner performance rests on synthetic datasets whose generation is conditioned only on the two landmarkers' R² values; no check is reported that the joint performance vectors (or higher-order correlations) across the remaining algorithms match the real-data distribution. This is load-bearing because the meta-labels and meta-learner training use the full algorithm performance matrix, so any LLM-induced distortion in non-landmarker performances could produce the observed gains in R², Hamming loss, and subset accuracy as an artifact rather than evidence of manifold reconstruction.

    Authors: We agree that this is an important validation step that was omitted in the original submission. To address this, we will add a new subsection in the experimental evaluation that compares the performance distributions. Specifically, we will compute the pairwise correlations between all algorithms' R² scores on the real datasets and on the synthetic ones, and report the mean absolute difference in these correlations. Additionally, we will visualize the distribution of performances for a few non-landmarker algorithms to show alignment. This analysis will confirm that the synthetic data preserves the necessary correlations, supporting that the improvements are due to better coverage of the performance space rather than artifacts. revision: yes
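The correlation check promised here can be sketched directly: compare the pairwise correlation matrices of algorithms' scores on real versus synthetic collections and report the mean absolute difference. A simulated illustration (the performance matrices below are synthetic stand-ins, not the paper's data):

```python
# Mean absolute difference between pairwise correlation matrices of
# (datasets x algorithms) performance scores, real vs synthetic.
import numpy as np

def corr_mad(perf_real, perf_synth):
    c_r = np.corrcoef(perf_real, rowvar=False)
    c_s = np.corrcoef(perf_synth, rowvar=False)
    iu = np.triu_indices_from(c_r, k=1)
    return np.mean(np.abs(c_r[iu] - c_s[iu]))

rng = np.random.default_rng(0)
real = rng.normal(size=(300, 1)) + 0.3 * rng.normal(size=(300, 6))
# synthetics that preserve the shared-factor dependence structure
synth_ok = rng.normal(size=(300, 1)) + 0.3 * rng.normal(size=(300, 6))
# synthetics whose inter-algorithm correlations are destroyed
synth_bad = rng.normal(size=(300, 6))
mad_ok = corr_mad(real, synth_ok)
mad_bad = corr_mad(real, synth_bad)
# mad_ok is small; mad_bad is large, flagging LLM-induced distortion
```

A large value of this statistic on the paper's data would support the referee's artifact hypothesis; a small one would support the authors' manifold reading.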

  2. Referee: [Results (quantitative comparisons)] The reported relative improvements (17.47% Hamming loss reduction, 100.41% subset accuracy improvement, +6.09% pooled R²) are given without accompanying statistical tests, confidence intervals, or details on the number of meta-learner runs and variance estimation. This weakens the ability to conclude that uniform augmentation is reliably superior and that the gains exceed what would be expected from random augmentation or label-distribution shifts.

    Authors: We appreciate this point and will strengthen the results section accordingly. In the revised manuscript, we will specify that all experiments were run with 5 different random seeds for the meta-learner training and report the mean and standard deviation for each metric. We will also include 95% confidence intervals computed via bootstrapping over the 42 datasets. Furthermore, we will add a comparison against a random augmentation baseline (sampling synthetic datasets without performance-space steering) to demonstrate that the gains are not merely from increasing dataset size or shifting label distributions. Paired statistical tests (Wilcoxon signed-rank) will be reported to assess significance. revision: yes
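The promised variance reporting is straightforward to prototype: a paired bootstrap over datasets for the mean metric difference (uniform minus margin), with scipy.stats.wilcoxon supplying the signed-rank test mentioned in the reply. The per-dataset differences below are simulated, not the paper's measurements:

```python
# Paired bootstrap CI for the mean per-dataset metric difference.
import numpy as np

def bootstrap_ci(diffs, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(diffs), size=(n_boot, len(diffs)))
    means = diffs[idx].mean(axis=1)
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return lo, hi

rng = np.random.default_rng(1)
# hypothetical Hamming-loss differences (uniform - margin) over 42 datasets
diffs = rng.normal(loc=-0.02, scale=0.03, size=42)
lo, hi = bootstrap_ci(diffs)
# a CI entirely below zero would support "uniform beats margin" on this metric
```

Reporting the interval alongside the headline relative percentages would address the referee's second objection directly.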

  3. Referee: [Conclusion and discussion] The low-dimensional performance manifold thesis is presented as a concluding postulate without any direct test (e.g., intrinsic dimensionality estimation, manifold reconstruction error on held-out real data, or comparison against alternative explanations such as simple density filling). Because the thesis is used to interpret why uniform sampling succeeds, its lack of supporting analysis makes the explanatory claim speculative rather than demonstrated.

    Authors: The thesis is indeed offered as an interpretive postulate based on the counterintuitive result that uniform sampling outperforms margin-based sampling. We do not claim to have performed a direct manifold analysis in the current work. In the revision, we will reframe this as a hypothesis motivated by the results and expand the discussion to include potential alternative explanations, such as the benefits of uniform coverage in reducing bias in meta-feature space. We will also outline how future work could test the manifold hypothesis using techniques like PCA on the algorithm performance matrix or estimating intrinsic dimensionality. This keeps the claim appropriately qualified while preserving the empirical contributions. revision: partial
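The future-work probe mentioned here, PCA on the algorithm performance matrix, can be sketched with a planted low-dimensional structure (the matrix below is simulated; it stands in for, but is not, the paper's datasets-by-algorithms scores):

```python
# Fraction of variance captured by the top k principal components of a
# (datasets x algorithms) performance matrix with a planted 2D structure.
import numpy as np

def explained_variance_ratio(M, k):
    Mc = M - M.mean(axis=0)
    s = np.linalg.svd(Mc, compute_uv=False)
    var = s ** 2
    return var[:k].sum() / var.sum()

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))    # 2D "performance manifold" coordinates
loadings = rng.normal(size=(2, 8))    # 8 algorithms
M = latent @ loadings + 0.1 * rng.normal(size=(500, 8))
r2d = explained_variance_ratio(M, 2)
# a two-component ratio near 1 is consistent with a low-dimensional manifold
```

On real data, a high two-component ratio would turn the concluding postulate into a testable, partially confirmed claim rather than an interpretive gloss.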

Circularity Check

0 steps flagged

No circularity: empirical results on held-out data with no self-referential derivation

Full rationale

The paper reports an empirical augmentation experiment: LLM-generated synthetic datasets are steered only by two landmarker R² values, added to a meta-dataset, and evaluated via regression and multi-label metrics on 42 held-out real UCI datasets. All reported gains (Hamming loss, subset accuracy, pooled R²) are measured against an unaugmented baseline on real data. The central thesis about a low-dimensional performance manifold is explicitly postulated from these observed improvements rather than derived from any equation or prior self-citation. No load-bearing step reduces a prediction to a fitted parameter, renames a known result, or imports uniqueness via self-citation. The work is therefore self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 1 invented entity

The approach rests on the domain assumption that algorithm performance can be captured in a 2D space from two landmarkers and on the ad-hoc assumption that LLM generation can be steered without bias. The number of synthetic datasets and choice of landmarkers are free parameters.

free parameters (2)
  • choice of two landmark algorithms
    Defines the axes of the performance space; their selection is a modeling choice not derived from data.
  • number of synthetic datasets (730)
    Scale of augmentation is chosen by the authors and affects the reported gains.
axioms (2)
  • domain assumption Algorithm performance on regression tasks can be meaningfully projected onto a low-dimensional space spanned by a small number of landmark algorithms.
    Invoked to justify the 2D geometric setting and the central thesis.
  • ad hoc to paper LLM-generated synthetic datasets can be conditioned on target performance regions without introducing non-representative artifacts.
    Required for both uniform and margin-based strategies to be valid.
invented entities (1)
  • low-dimensional performance manifold (no independent evidence)
    purpose: Postulated to explain why uniform coverage reduces reconstruction bias and improves meta-learning.
    Introduced in the final thesis; no independent evidence or falsifiable prediction is supplied in the abstract.

pith-pipeline@v0.9.0 · 5607 in / 1599 out tokens · 74875 ms · 2026-05-12T03:35:41.384481+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

47 extracted references · 47 canonical work pages
