
Support vector machine regression code (SVM regression code implementation)

Posted by admin: 2022-12-19 20:02


Today I will share with you some knowledge about support vector machine regression code, including how to implement SVM regression. If it happens to solve the problem you are currently facing, don't forget to follow this site. Let's get started!

Contents of this article:

How to do support vector machine regression on CSV data

Perform feature analysis on the preprocessed CSV data to implement support vector regression.

Data and features determine the upper limit of machine learning, while models and algorithms merely approach that limit. For the model to give better results, feature analysis and extraction are essential: import the required libraries, run feature analysis on the data, and use the resulting feature patterns to build the support vector regression.

Support vector machines and support vector regression are among the most widely used methods in machine learning today; you can see their traces in face recognition, character recognition, action recognition, pose recognition, and more. Support vector regression (SVR) is the SVM applied to regression problems, and SVR models come in many variants built on different loss functions.
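To make this concrete, here is a minimal MATLAB sketch (not from the original answer) of SVR on CSV data. It assumes a hypothetical file data.csv whose last column is the response, and it uses fitrsvm from the Statistics and Machine Learning Toolbox (R2015b or later) rather than the classification-oriented svmtrain listed in the next section:

T = readtable('data.csv');   % hypothetical CSV: predictor columns, then the response
X = T{:,1:end-1};            % predictor columns
y = T{:,end};                % response column
% Gaussian-kernel epsilon-SVR; standardizing the predictors matters for kernel methods
mdl = fitrsvm(X, y, 'KernelFunction', 'gaussian', 'Standardize', true);
yhat = predict(mdl, X);      % in-sample predictions
fprintf('Training RMSE: %g\n', sqrt(mean((y - yhat).^2)));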

MATLAB code for support vector machines

If your MATLAB version is 7.0 or later, you can open the built-in implementations with:

edit svmtrain

edit svmclassify

edit svmpredict

function [svm_struct, svIndex] = svmtrain(training, groupnames, varargin)

%SVMTRAIN trains a support vector machine classifier

%

% SVMStruct = SVMTRAIN(TRAINING,GROUP) trains a support vector machine

% classifier using data TRAINING taken from two groups given by GROUP.

% SVMStruct contains information about the trained classifier that is

% used by SVMCLASSIFY for classification. GROUP is a column vector of

% values of the same length as TRAINING that defines two groups. Each

% element of GROUP specifies the group the corresponding row of TRAINING

% belongs to. GROUP can be a numeric vector, a string array, or a cell

% array of strings. SVMTRAIN treats NaNs or empty strings in GROUP as

% missing values and ignores the corresponding rows of TRAINING.

%

% SVMTRAIN(...,'KERNEL_FUNCTION',KFUN) allows you to specify the kernel

% function KFUN used to map the training data into kernel space. The

% default kernel function is the dot product. KFUN can be one of the

% following strings or a function handle:

%

% 'linear' Linear kernel or dot product

% 'quadratic' Quadratic kernel

% 'polynomial' Polynomial kernel (default order 3)

% 'rbf' Gaussian Radial Basis Function kernel

% 'mlp' Multilayer Perceptron kernel (default scale 1)

% function A kernel function specified using @,

% for example @KFUN, or an anonymous function

%

% A kernel function must be of the form

%

% function K = KFUN(U, V)

%

% The returned value, K, is a matrix of size M-by-N, where U and V have M

% and N rows respectively. If KFUN is parameterized, you can use

% anonymous functions to capture the problem-dependent parameters. For

% example, suppose that your kernel function is

%

% function k = kfun(u,v,p1,p2)

% k = tanh(p1*(u*v')+p2);

%

% You can set values for p1 and p2 and then use an anonymous function:

% @(u,v) kfun(u,v,p1,p2).

%

% SVMTRAIN(...,'POLYORDER',ORDER) allows you to specify the order of a

% polynomial kernel. The default order is 3.

%

% SVMTRAIN(...,'MLP_PARAMS',[P1 P2]) allows you to specify the

% parameters of the Multilayer Perceptron (mlp) kernel. The mlp kernel

% requires two parameters, P1 and P2, where K = tanh(P1*U*V' + P2), with

% P1 > 0 and P2 < 0. Default values are P1 = 1 and P2 = -1.

%

% SVMTRAIN(...,'METHOD',METHOD) allows you to specify the method used

% to find the separating hyperplane. Options are

%

% 'QP' Use quadratic programming (requires the Optimization Toolbox)

% 'LS' Use least-squares method

%

% If you have the Optimization Toolbox, then the QP method is the default

% method. If not, the only available method is LS.

%

% SVMTRAIN(...,'QUADPROG_OPTS',OPTIONS) allows you to pass an OPTIONS

% structure created using OPTIMSET to the QUADPROG function when using

% the 'QP' method. See help optimset for more details.

%

% SVMTRAIN(...,'SHOWPLOT',true), when used with two-dimensional data,

% creates a plot of the grouped data and plots the separating line for

% the classifier.

%

% Example:

% % Load the data and select features for classification

% load fisheriris

% data = [meas(:,1), meas(:,2)];

% % Extract the Setosa class

% groups = ismember(species,'setosa');

% % Randomly select training and test sets

% [train, test] = crossvalind('holdOut',groups);

% cp = classperf(groups);

% % Use a linear support vector machine classifier

% svmStruct = svmtrain(data(train,:),groups(train),'showplot',true);

% classes = svmclassify(svmStruct,data(test,:),'showplot',true);

% % See how well the classifier performed

% classperf(cp,classes,test);

% cp.CorrectRate

%

% See also CLASSIFY, KNNCLASSIFY, QUADPROG, SVMCLASSIFY.

% Copyright 2004 The MathWorks, Inc.

% $Revision: 1.1.12.1 $ $Date: 2004/12/24 20:43:35 $

% References:

% [1] Kecman, V, Learning and Soft Computing,

% MIT Press, Cambridge, MA. 2001.

% [2] Suykens, J.A.K., Van Gestel, T., De Brabanter, J., De Moor, B.,

% Vandewalle, J., Least Squares Support Vector Machines,

% World Scientific, Singapore, 2002.

% [3] Scholkopf, B., Smola, A.J., Learning with Kernels,

% MIT Press, Cambridge, MA. 2002.

%

% SVMTRAIN(...,'KFUNARGS',ARGS) allows you to pass additional

% arguments to kernel functions.

% set defaults

plotflag = false;

qp_opts = [];

kfunargs = {};

setPoly = false; usePoly = false;

setMLP = false; useMLP = false;

if ~isempty(which('quadprog'))

useQuadprog = true;

else

useQuadprog = false;

end

% set default kernel function

kfun = @linear_kernel;

% check inputs

if nargin < 2

error(nargchk(2,Inf,nargin))

end

numoptargs = nargin -2;

optargs = varargin;

% grp2idx sorts a numeric grouping var ascending, and a string grouping

% var by order of first occurrence

[g,groupString] = grp2idx(groupnames);

% check group is a vector -- though char input is special...

if ~isvector(groupnames) && ~ischar(groupnames)

error('Bioinfo:svmtrain:GroupNotVector',...

'Group must be a vector.');

end

% make sure that the data is correctly oriented.

if size(groupnames,1) == 1

groupnames = groupnames';

end

% make sure data is the right size

n = length(groupnames);

if size(training,1) ~= n

if size(training,2) == n

training = training';

else

error('Bioinfo:svmtrain:DataGroupSizeMismatch',...

'GROUP and TRAINING must have the same number of rows.')

end

end

% NaNs are treated as unknown classes and are removed from the training

% data

nans = find(isnan(g));

if length(nans) > 0

training(nans,:) = [];

g(nans) = [];

end

ngroups = length(groupString);

if ngroups > 2

error('Bioinfo:svmtrain:TooManyGroups',...

'SVMTRAIN only supports classification into two groups.\nGROUP contains %d different groups.',ngroups)

end

% convert to 1, -1.

g = 1 - (2* (g-1));

% handle optional arguments

if numoptargs >= 1

if rem(numoptargs,2)== 1

error('Bioinfo:svmtrain:IncorrectNumberOfArguments',...

'Incorrect number of arguments to %s.',mfilename);

end

okargs = {'kernel_function','method','showplot','kfunargs','quadprog_opts','polyorder','mlp_params'};

for j=1:2:numoptargs

pname = optargs{j};

pval = optargs{j+1};

k = strmatch(lower(pname), okargs);%#ok

if isempty(k)

error('Bioinfo:svmtrain:UnknownParameterName',...

'Unknown parameter name: %s.',pname);

elseif length(k) > 1

error('Bioinfo:svmtrain:AmbiguousParameterName',...

'Ambiguous parameter name: %s.',pname);

else

switch(k)

case 1 % kernel_function

if ischar(pval)

okfuns = {'linear','quadratic',...

'radial','rbf','polynomial','mlp'};

funNum = strmatch(lower(pval), okfuns);%#ok

if isempty(funNum)

funNum = 0;

end

switch funNum %maybe make this less strict in the future

case 1

kfun = @linear_kernel;

case 2

kfun = @quadratic_kernel;

case {3,4}

kfun = @rbf_kernel;

case 5

kfun = @poly_kernel;

usePoly = true;

case 6

kfun = @mlp_kernel;

useMLP = true;

otherwise

error('Bioinfo:svmtrain:UnknownKernelFunction',...

'Unknown Kernel Function %s.',kfun);

end

elseif isa (pval, 'function_handle')

kfun = pval;

else

error('Bioinfo:svmtrain:BadKernelFunction',...

'The kernel function input does not appear to be a function handle\nor valid function name.')

end

case 2 % method

if strncmpi(pval,'qp',2)

useQuadprog = true;

if isempty(which('quadprog'))

warning('Bioinfo:svmtrain:NoOptim',...

'The Optimization Toolbox is required to use the quadratic programming method.')

useQuadprog = false;

end

elseif strncmpi(pval,'ls',2)

useQuadprog = false;

else

error('Bioinfo:svmtrain:UnknownMethod',...

'Unknown method option %s. Valid methods are ''QP'' and ''LS''',pval);

end

case 3 % display

if pval ~= 0

if size(training,2) == 2

plotflag = true;

else

warning('Bioinfo:svmtrain:OnlyPlot2D',...

'The display option can only plot 2D training data.')

end

end

case 4 % kfunargs

if iscell(pval)

kfunargs = pval;

else

kfunargs = {pval};

end

case 5 % quadprog_opts

if isstruct(pval)

qp_opts = pval;

elseif iscell(pval)

qp_opts = optimset(pval{:});

else

error('Bioinfo:svmtrain:BadQuadprogOpts',...

'QUADPROG_OPTS must be an opts structure.');

end

case 6 % polyorder

if ~isscalar(pval) || ~isnumeric(pval)

error('Bioinfo:svmtrain:BadPolyOrder',...

'POLYORDER must be a scalar value.');

end

if pval ~= floor(pval) || pval < 1

error('Bioinfo:svmtrain:PolyOrderNotInt',...

'The order of the polynomial kernel must be a positive integer.')

end

kfunargs = {pval};

setPoly = true;

case 7 % mlpparams

if numel(pval)~=2

error('Bioinfo:svmtrain:BadMLPParams',...

'MLP_PARAMS must be a two element array.');

end

if ~isscalar(pval(1)) || ~isscalar(pval(2))

error('Bioinfo:svmtrain:MLPParamsNotScalar',...

'The parameters of the multi-layer perceptron kernel must be scalar.');

end

kfunargs = {pval(1),pval(2)};

setMLP = true;

end

end

end

end

if setPoly && ~usePoly

warning('Bioinfo:svmtrain:PolyOrderNotPolyKernel',...

'You specified a polynomial order but not a polynomial kernel');

end

if setMLP && ~useMLP

warning('Bioinfo:svmtrain:MLPParamNotMLPKernel',...

'You specified MLP parameters but not an MLP kernel');

end

% plot the data if requested

if plotflag

[hAxis,hLines] = svmplotdata(training,g);

legend(hLines,cellstr(groupString));

end

% calculate kernel function

try

kx = feval(kfun,training,training,kfunargs{:});

% ensure function is symmetric

kx = (kx+kx')/2;

catch

error('Bioinfo:svmtrain:UnknownKernelFunction',...

'Error calculating the kernel function:\n%s\n', lasterr);

end

% create Hessian

% add small constant eye to force stability

H =((g*g').*kx) + sqrt(eps(class(training)))*eye(n);

if useQuadprog

% The large scale solver cannot handle this type of problem, so turn it

% off.

qp_opts = optimset(qp_opts,'LargeScale','Off');

% X=QUADPROG(H,f,A,b,Aeq,beq,LB,UB,X0,opts)

alpha = quadprog(H,-ones(n,1),[],[],...

g',0,zeros(n,1),inf *ones(n,1),zeros(n,1),qp_opts);

% The support vectors are the non-zeros of alpha

svIndex = find(alpha > sqrt(eps));

sv = training(svIndex,:);

% calculate the parameters of the separating line from the support

% vectors.

alphaHat = g(svIndex).*alpha(svIndex);

% Calculate the bias by applying the indicator function to the support

% vector with largest alpha.

[maxAlpha,maxPos] = max(alpha); %#ok

bias = g(maxPos) - sum(alphaHat.*kx(svIndex,maxPos));

% an alternative method is to average the values over all support vectors

% bias = mean(g(sv)' - sum(alphaHat(:,ones(1,numSVs)).*kx(sv,sv)));

% An alternative way to calculate support vectors is to look for zeros of

% the Lagrangians (fifth output from QUADPROG).

%

% [alpha,fval,output,exitflag,t] = quadprog(H,-ones(n,1),[],[],...

% g',0,zeros(n,1),inf *ones(n,1),zeros(n,1),opts);

%

% sv = t.lower < sqrt(eps) & t.upper < sqrt(eps);

else % Least-Squares

% now build up compound matrix for solver

A = [0 g';g,H];

b = [0;ones(size(g))];

x = A\b;

% calculate the parameters of the separating line from the support

% vectors.

sv = training;

bias = x(1);

alphaHat = g.*x(2:end);

end

svm_struct.SupportVectors = sv;

svm_struct.Alpha = alphaHat;

svm_struct.Bias = bias;

svm_struct.KernelFunction = kfun;

svm_struct.KernelFunctionArgs = kfunargs;

svm_struct.GroupNames = groupnames;

svm_struct.FigureHandles = [];

if plotflag

hSV = svmplotsvs(hAxis,svm_struct);

svm_struct.FigureHandles = {hAxis,hLines,hSV};

end
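For reference, the example from the help text above can be run directly as a script (it requires the fisheriris data set together with crossvalind and classperf, which ship with the same toolbox generation):

load fisheriris
data = [meas(:,1), meas(:,2)];                    % select two features
groups = ismember(species,'setosa');              % two-class problem: setosa vs. the rest
[train, test] = crossvalind('holdOut',groups);    % random training/test split
cp = classperf(groups);
svmStruct = svmtrain(data(train,:),groups(train),'showplot',true);
classes = svmclassify(svmStruct,data(test,:),'showplot',true);
classperf(cp,classes,test);                       % evaluate on the held-out set
cp.CorrectRate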

How to explain support vector regression in plain terms

A super-plain explanation:

Support vector machines were designed to solve classification problems.

Consider the simplest case first: peas and grains of rice. A sieve separates them quickly; the small grains fall through while the large ones stay behind.

Expressed as a function: when the diameter d is greater than some value D we judge it a pea, and when it is smaller, a grain of rice.

d > D: pea

d < D: grain of rice

On the number line, everything to the left of D is a grain of rice and everything to the right is a pea; that is the one-dimensional case.

But real problems are not that simple; the deciding factor is not just size. How would you classify two varieties of a flower?

Suppose two attributes determine their classification: petal size and color. Classifying by a single attribute, as we did with the rice, no longer works, so we set up two variables: size x and color y.

We throw all the data onto the x-y plane as points. If these two attributes really determine the two varieties, the data should cluster into two groups on this plane.

We only need to find a straight line that separates the two groups and classification becomes easy: when new data arrives, drop it onto the plane and see which side of the line it lands on.

For example, take the line x+y-2=0. Substitute a data point (x,y): if x+y-2>0 we call it class A, and if x+y-2<0, class B.

By analogy there are classifications over three, four, or N attribute dimensions; what we construct then may not be a line but a plane, or a hyperplane.

A three-dimensional classifying function: x+y+z-2=0 is a separating plane.

Sometimes the separating boundary is not a straight line but a curve. By transforming the data through some function we can convert the problem back into the linear, higher-dimensional one above; this is the idea behind kernel functions.

For example, suppose the separating function is a circle, x^2+y^2-4=0. Setting a=x^2 and b=y^2 turns it into the linear problem a+b-4=0.
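To make the kernel idea concrete, here is a small illustrative sketch (my addition, not part of the original answer; it uses fitcsvm from the Statistics and Machine Learning Toolbox): a linear classifier fails on a circular boundary in (x,y) but succeeds after the feature map (a,b) = (x^2,y^2).

% Two classes separated by the circle x^2 + y^2 = 2 inside the square [-2,2]^2
rng(1);
X = 4*rand(400,2) - 2;
y = double(sum(X.^2,2) < 2);             % 1 inside the circle, 0 outside
mdlRaw  = fitcsvm(X, y);                 % linear kernel in the original space
mdlFeat = fitcsvm(X.^2, y);              % linear kernel after (a,b) = (x^2,y^2)
acc = @(m,Z) 100*mean(predict(m,Z) == y);
fprintf('linear in (x,y): %.1f%% training accuracy\n', acc(mdlRaw, X));
fprintf('linear in (a,b): %.1f%% training accuracy\n', acc(mdlFeat, X.^2));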

That is the idea behind support vector machines.

The "machine" part just means algorithm; in machine learning, the word "machine" is often used to stand for an algorithm.

The "support vectors" are certain special points in the data set. Take the line x+y-2=0 again: the region above it, where x+y-2>0, is entirely class A, and the region below, where x+y-2<0, is entirely class B. When we look for this line, we generally look at the two clusters of data and, in each, the points at the outermost edge, i.e. the few points closest to the separating line; the remaining points play no role in fixing the line's final position. So I would informally call these the "support points" (meaning: the points that actually matter), but mathematics has no such term. In mathematics a point can also be called a vector: a 2-D point (x,y) is a 2-D vector, a 3-D one is (x,y,z). So "support points" becomes "support vectors", which sounds more professional and impressive.

Put together: support vector machine.

A MATLAB program for nonlinear support vector regression

function [Alpha1,Alpha2,Alpha,Flag,B]=SVMNR(X,Y,Epsilon,C,TKF,Para1,Para2)

%%

% SVMNR.m

% Support Vector Machine for Nonlinear Regression

% All rights reserved

%%

% General-purpose program for nonlinear support vector regression

% Original work of the GreenSim team; please credit the source when reposting

% Email: greensim@163.com

% GreenSim team homepage:

% Welcome to visit GreenSim, the algorithm simulation team →

% What the program does:

% Performs nonlinear regression with a support vector machine, yielding the
% support-vector expansion of the nonlinear function y=f(x1,x2,…,xn). The

% quadratic program is solved by calling quadprog from the Optimization
% Toolbox. At its entry point the function normalizes the data to [-1,1],

% so the coefficients of the resulting regression expansion refer to the
% normalized data; for simulation and testing, use the companion
% Regression function.

% Main reference:

% Zhu Guoqiang, Liu Shirong, et al. Support vector machines and their
% application to function approximation. Journal of East China University
% of Science and Technology.

% Input arguments:

% X Input samples, an n-by-l matrix; n is the number of variables, l the number of samples

% Y Output samples, a 1-by-l matrix; l is the number of samples

% Epsilon Parameter of the ε-insensitive loss; the larger Epsilon, the fewer support vectors

% C Penalty coefficient; generalization degrades if C is too large or too small

% TKF Type of Kernel Function

% TKF=1 linear kernel; note: with a linear kernel this performs linear SVM regression

% TKF=2 polynomial kernel

% TKF=3 radial basis (RBF) kernel

% TKF=4 exponential kernel

% TKF=5 sigmoid kernel

% TKF=any other value: user-defined kernel

% Para1 first parameter of the kernel

% Para2 second parameter of the kernel

% Note: for the kernel parameter definitions see Regression.m and the definitions inside SVMNR.m

% Output arguments:

% Alpha1 the α coefficients

% Alpha2 the α* coefficients

% Alpha the support-vector weight vector (α-α*)

% Flag 1-by-l markers: 0 = non-support vector, 1 = boundary support vector, 2 = standard support vector

% B constant term of the regression equation

%--------------------------------------------------------------------------

%%

%-----------------------Data normalization--------------------------------------

nntwarn off

X=premnmx(X);

Y=premnmx(Y);

%%

%%

%-----------------------Kernel parameter initialization------------------------------------

switch TKF

case 1

%Linear kernel K=sum(x.*y)

%no parameters need to be defined

case 2

%Polynomial kernel K=(sum(x.*y)+c)^p

c=Para1;%c=0.1;

p=Para2;%p=2;

case 3

%Radial basis (RBF) kernel K=exp(-(norm(x-y))^2/(2*sigma^2))

sigma=Para1;%sigma=6;

case 4

%Exponential kernel K=exp(-norm(x-y)/(2*sigma^2))

sigma=Para1;%sigma=3;

case 5

%Sigmoid kernel K=1/(1+exp(-v*sum(x.*y)+c))

v=Para1;%v=0.5;

c=Para2;%c=0;

otherwise

%User-defined kernel: modify it inside this function yourself; note that it appears in several places that must all be changed consistently!

%provisionally defined as K=exp(-(sum((x-y).^2)/(2*sigma^2)))

sigma=Para1;%sigma=8;

end

%%

%%

%-----------------------Build the kernel matrix K-------------------------------------------

l=size(X,2);

K=zeros(l,l);%initialize the K matrix

for i=1:l

for j=1:l

x=X(:,i);

y=X(:,j);

switch TKF%build the K matrix with the kernel function selected by TKF

case 1

K(i,j)=sum(x.*y);

case 2

K(i,j)=(sum(x.*y)+c)^p;

case 3

K(i,j)=exp(-(norm(x-y))^2/(2*sigma^2));

case 4

K(i,j)=exp(-norm(x-y)/(2*sigma^2));

case 5

K(i,j)=1/(1+exp(-v*sum(x.*y)+c));

otherwise

K(i,j)=exp(-(sum((x-y).^2)/(2*sigma^2)));

end

end

end

%%

%%

%------------Build the parameters H,Ft,Aeq,Beq,lb,ub of the quadratic programming model------------------------

%In SVM nonlinear regression, the coefficients of the regression function are determined by solving a quadratic program

Ft=[Epsilon*ones(1,l)-Y,Epsilon*ones(1,l)+Y];

Aeq=[ones(1,l),-ones(1,l)];

Beq=0;

ub=C*ones(2*l,1);

%%

%%

%--------------Solve the quadratic program with quadprog from the Optimization Toolbox------------------------

OPT=optimset;

OPT.LargeScale='off';

OPT.Display='off';

% Note: the lines that build H and lb and the quadprog call itself were
% missing from the listing as posted; the reconstruction below follows from
% the definitions of Ft, Aeq, Beq and ub above, with Gamma stacking the
% dual variables [alpha; alpha*].
H=[K,-K;-K,K];   % Hessian of the dual QP (add a small ridge, e.g. +1e-10*eye(2*l), if quadprog reports non-convexity)
lb=zeros(2*l,1); % lower bounds: alpha, alpha* >= 0
Gamma=quadprog(H,Ft',[],[],Aeq,Beq,lb,ub,[],OPT);

%%

%%

%------------------------Collect the coefficients of the regression equation for output------------------------------

Alpha1=(Gamma(1:l,1))';

Alpha2=(Gamma((l+1):end,1))';

Alpha=Alpha1-Alpha2;

Flag=2*ones(1,l);

%%

%%

%---------------------------Classify the support vectors----------------------------------

Err=0.000000000001;

for i=1:l

AA=Alpha1(i);

BB=Alpha2(i);

if (abs(AA-0)<=Err) && (abs(BB-0)<=Err)

Flag(i)=0;%non-support vector

end

if (AA>Err) && (AA<C-Err) && (abs(BB-0)<=Err)

Flag(i)=2;%standard support vector

end

if (abs(AA-0)<=Err) && (BB>Err) && (BB<C-Err)

Flag(i)=2;%standard support vector

end

if (abs(AA-C)<=Err) && (abs(BB-0)<=Err)

Flag(i)=1;%boundary support vector

end

if (abs(AA-0)<=Err) && (abs(BB-C)<=Err)

Flag(i)=1;%boundary support vector

end

end

%%

%%

%--------------------Compute the constant term B of the regression equation---------------------------------

B=0;

counter=0;

for i=1:l

AA=Alpha1(i);

BB=Alpha2(i);

if (AA>Err) && (AA<C-Err) && (abs(BB-0)<=Err)

%compute the kernel-weighted sum over the support vectors

SUM=0;

for j=1:l

if Flag(j)~=0

switch TKF

case 1

SUM=SUM+Alpha(j)*sum(X(:,j).*X(:,i));

case 2

SUM=SUM+Alpha(j)*(sum(X(:,j).*X(:,i))+c)^p;

case 3

SUM=SUM+Alpha(j)*exp(-(norm(X(:,j)-X(:,i)))^2/(2*sigma^2));

case 4

SUM=SUM+Alpha(j)*exp(-norm(X(:,j)-X(:,i))/(2*sigma^2));

case 5

SUM=SUM+Alpha(j)*1/(1+exp(-v*sum(X(:,j).*X(:,i))+c));

otherwise

SUM=SUM+Alpha(j)*exp(-(sum((X(:,j)-X(:,i)).^2)/(2*sigma^2)));

end

end

end

b=Y(i)-SUM-Epsilon;

B=B+b;

counter=counter+1;

end

if (abs(AA-0)<=Err) && (BB>Err) && (BB<C-Err)

SUM=0;

for j=1:l

if Flag(j)~=0

switch TKF

case 1

SUM=SUM+Alpha(j)*sum(X(:,j).*X(:,i));

case 2

SUM=SUM+Alpha(j)*(sum(X(:,j).*X(:,i))+c)^p;

case 3

SUM=SUM+Alpha(j)*exp(-(norm(X(:,j)-X(:,i)))^2/(2*sigma^2));

case 4

SUM=SUM+Alpha(j)*exp(-norm(X(:,j)-X(:,i))/(2*sigma^2));

case 5

SUM=SUM+Alpha(j)*1/(1+exp(-v*sum(X(:,j).*X(:,i))+c));

otherwise

SUM=SUM+Alpha(j)*exp(-(sum((X(:,j)-X(:,i)).^2)/(2*sigma^2)));

end

end

end

b=Y(i)-SUM+Epsilon;

B=B+b;

counter=counter+1;

end

end

if counter==0

B=0;

else

B=B/counter;

end
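A hypothetical call, assuming the reconstructed quadprog step above (sketch only, with made-up data, since the companion Regression.m used for prediction is not included in this post):

% Fit noisy 1-D data with an RBF kernel: TKF=3, Para1 = sigma, Para2 unused
x = linspace(-3,3,60);                          % 1-by-l inputs (n = 1 variable)
y = sin(2*x)./(abs(x)+0.5) + 0.05*randn(1,60);  % 1-by-l noisy targets
[Alpha1,Alpha2,Alpha,Flag,B] = SVMNR(x,y,0.05,100,3,1,0);
fprintf('%d of %d samples are support vectors\n', sum(Flag>0), numel(x));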

Hope this helps; please award the points!

This concludes our introduction to support vector machine regression code and its implementation. Did you find the information you needed? If you want to learn more about this topic, remember to bookmark and follow this site.


