
oeasy Teaches You to Play with Linux 010108: Which One Exactly? which

Recap 😌 Last time we covered whereis, the command that locates other commands. What if I want to find the location of whereis itself? 🤔 `whereis whereis`. A command can have three kinds of locations: the binary, the source, and the manual page. We found the location of the source code for ls. But sometimes we face this problem: a command has several binaries associated with it. Which one are we actually using? 🤔 Which one exactly? which! 🤔 Say we want to know where the java we are running lives 🙄 `whereis java`. If I only want the binaries: `whereis -b java`. Still plenty of hits. Which one exactly? 🤔 `which java`. This gives us the first of those binaries, i.e. the location on disk that actually runs when we execute the command. Let's play 🤗 and feed all kinds of commands to which as arguments: `which pwd`, `which uname`, `which whatis`, `which whereis`. Now we have three soul-searching questions ✊ whatis: who are you? whereis: where are you? which: which one exactly? With these three commands we can learn the purpose and location of any command, so we call them the Three Soul Questions! 👊 Let's put the cat command through the Three Soul Questions: `whatis cat`, `whereis cat`, `which cat`. With these three commands we can get the basic information about any command! Which command shall we interrogate next? 🤔 Until next time! 👋 Previous chapter: 010107 whereis · Get involved · Try the experiment · Next chapter: 010109 clear
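As a side note (my own addition, not from the original post): Python's standard library performs the same PATH lookup that which does, which gives a quick way to script the third soul question.

```python
# Minimal sketch: emulate `which` with Python's standard library.
# shutil.which scans the directories listed in $PATH in order and
# returns the first executable match, just as `which` prints the
# first binary hit.
import shutil

for cmd in ["pwd", "uname", "whatis", "whereis", "cat"]:
    print(cmd, "->", shutil.which(cmd))
```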

Manthan, Codefest 19 (open for everyone, rated, Div. 1 + Div. 2) G. Polygons, number theory

G. Polygons

Description
You are given two integers 𝑛 and 𝑘. You need to construct 𝑘 regular polygons having the same circumcircle, with distinct number of sides 𝑙 between 3 and 𝑛. (Illustration for the first example.) You can rotate them to minimize the total number of distinct points on the circle. Find the minimum number of such points.

Input
The only line of input contains two integers 𝑛 and 𝑘 (3 ≤ 𝑛 ≤ 10^6, 1 ≤ 𝑘 ≤ 𝑛−2), the maximum number of sides of a polygon and the number of polygons to construct, respectively.

Output
Print a single integer — the minimum number of points required for 𝑘 polygons.

Examples
input: 6 2 → output: 6
input: 200 50 → output: 708

Note
In the first example, we have 𝑛 = 6 and 𝑘 = 2. So we have 4 polygons with number of sides 3, 4, 5 and 6 to choose from, and if we choose the triangle and the hexagon, then we can arrange them as shown in the picture in the statement. Hence the minimum number of points required on the circle is 6, which is also the minimum over all possible sets.

Problem summary
Given n and k, choose k regular polygons with 3 to n vertices each, all sharing one circumcircle, so that the total number of distinct points on the circle is minimized.

Solution
A simple observation: if b is a divisor of a, then on a shared circumcircle, once a is chosen, additionally choosing b never adds a single point. Fix a common point P on the circle; a regular m-gon can be rotated so its vertices sit at fractions 0, 1/m, 2/m, …, (m−1)/m of the circumference away from P. The answer is therefore the number of distinct fractions produced by all the chosen polygons. If we guarantee that before choosing A, all divisors of A have already been chosen, then each new side count m contributes exactly the fractions with reduced denominator m, of which there are φ(m), so the answer is a sum of values of Euler's totient function. Concretely: sieve φ(1..n), sort, and sum the k+2 smallest values; φ(1) = φ(2) = 1 are always the two smallest and stand in for the "free" denominators 1 and 2 that appear in the divisor closure. The one exception is k = 1: a single polygon is best taken as a triangle, giving 3 points (a lone triangle never produces the 1/2 point, so the formula would overcount).

Code

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, k;
const int maxn = 1e6 + 7;
int phi[maxn];

// Totient sieve: after the call, phi[i] holds Euler's phi(i) for all i <= n.
void get_phi(int n) {
    iota(phi, phi + n + 1, 0);
    for (int i = 2; i <= n; i++) {
        if (phi[i] == i) {  // i is prime
            phi[i] = i - 1;
            for (int j = 2 * i; j <= n; j += i) {
                phi[j] = (phi[j] / i) * (i - 1);
            }
        }
    }
}

int main() {
    cin >> n >> k;
    if (k == 1) {  // a single polygon: the triangle, 3 points
        cout << "3" << endl;
        return 0;
    }
    k = k + 2;  // account for the "free" denominators 1 and 2
    get_phi(n);
    sort(phi + 1, phi + 1 + n);
    cout << accumulate(phi + 1, phi + 1 + k, 0ll) << endl;
}
```
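A quick sanity check of the totient argument (my own Python sketch, not part of the original post): for a divisor-closed set of side counts, counting the distinct fractions j/m directly agrees with the totient sum.

```python
# Brute-force check: for a divisor-closed set of side counts, the distinct
# points on the circle (fractions of the circumference measured from P)
# number exactly sum(phi(m)). phi here is computed by trial division,
# which is fine for tiny inputs.
from fractions import Fraction

def phi(m):
    result, x, p = m, m, 2
    while p * p <= x:
        if x % p == 0:
            while x % p == 0:
                x //= p
            result -= result // p
        p += 1
    if x > 1:
        result -= result // x
    return result

sides = [1, 2, 3, 6]  # divisor closure of the triangle + hexagon example
points = {Fraction(j, m) for m in sides for j in range(m)}
assert len(points) == sum(phi(m) for m in sides)
print(len(points))  # 6, matching the first sample
```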

cs231n Assignment 1: Related Code

1. KNN

k_nearest_neighbor.py

```python
from builtins import range
from builtins import object
import numpy as np


class KNearestNeighbor(object):
    """A kNN classifier with L2 distance."""

    def __init__(self):
        pass

    def train(self, X, y):
        """Train the classifier. For k-nearest neighbors this is just
        memorizing the training data.

        - X: array of shape (num_train, D) of training samples.
        - y: array of shape (num_train,) of training labels.
        """
        self.X_train = X
        self.y_train = y

    def predict(self, X, k=1, num_loops=0):
        """Predict labels for test data X of shape (num_test, D) using the
        k nearest neighbors; num_loops selects which distance
        implementation (0, 1 or 2 explicit loops) to use."""
        if num_loops == 0:
            dists = self.compute_distances_no_loops(X)
        elif num_loops == 1:
            dists = self.compute_distances_one_loop(X)
        elif num_loops == 2:
            dists = self.compute_distances_two_loops(X)
        else:
            raise ValueError('Invalid value %d for num_loops' % num_loops)
        return self.predict_labels(dists, k=k)

    def compute_distances_two_loops(self, X):
        """Nested loop over test and training points; dists[i, j] is the
        Euclidean distance between test point i and training point j.
        (Per the assignment: no loop over the dimension, no np.linalg.norm.)"""
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            for j in range(num_train):
                dists[i][j] = np.sqrt(np.sum(np.square(self.X_train[j, :] - X[i, :])))
        return dists

    def compute_distances_one_loop(self, X):
        """Single loop over the test data; broadcasting handles the rest."""
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            dists[i, :] = np.sqrt(np.sum(np.square(X[i] - self.X_train), axis=1))
        return dists

    def compute_distances_no_loops(self, X):
        """No explicit loops. Expand ||x - t||^2 = ||x||^2 + ||t||^2 - 2 x.t
        and use one matrix multiplication plus two broadcast sums (the hint
        from the assignment)."""
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        X_test_squ = np.sum(np.square(X), axis=1).reshape(num_test, 1)  # (num_test, 1)
        X_train_squ = np.sum(np.square(self.X_train), axis=1)           # (num_train,)
        x_te_tr = X.dot(self.X_train.T)                                 # (num_test, num_train)
        # Broadcasting adds the row and column squared norms; take the
        # square root at the end (the original version forgot the sqrt,
        # which argsort tolerates but is inconsistent with the loop versions).
        dists = np.sqrt(X_test_squ + X_train_squ - 2 * x_te_tr)
        return dists

    def predict_labels(self, dists, k=1):
        """Given the (num_test, num_train) distance matrix, vote among the
        labels of the k nearest training points."""
        num_test = dists.shape[0]
        y_pred = np.zeros(num_test)
        for i in range(num_test):
            kids = np.argsort(dists[i])
            closest_y = self.y_train[kids[:k]]
            # np.bincount + argmax picks the most common label and breaks
            # ties toward the smaller label, as the assignment requires.
            y_pred[i] = np.argmax(np.bincount(closest_y))
        return y_pred
```

Inline Question 1
Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.) What in the data is the cause behind the distinctly bright rows? What causes the columns?

Your Answer: A bright row means that test image is far from all the training images (an outlier among the test data). Conversely, a training image that is far from all the test images causes a bright column.

Inline Question 2
The general standard deviation σ and pixel-wise standard deviation σ_ij are defined similarly. Which of the following preprocessing steps will not change the performance of a Nearest Neighbor classifier that uses L1 distance? Select all that apply.
1. Subtracting the mean μ (p̃^(k)_ij = p^(k)_ij − μ).
2. Subtracting the per-pixel mean μ_ij (p̃^(k)_ij = p^(k)_ij − μ_ij).
3. Subtracting the mean μ and dividing by the standard deviation σ.
4. Subtracting the pixel-wise mean μ_ij and dividing by the pixel-wise standard deviation σ_ij.
5. Rotating the coordinate axes of the data.

Your Answer: 1, 2, 3
Your Explanation: Choices 1, 2 and 3 are normalization steps (shifts, plus a single global rescaling) that preserve the ordering of L1 distances, so they do not change performance. L1 distance is tied to the coordinate system, so choice 5 does change it.

Cross-validation

```python
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]

# Split the training data into folds; X_train_folds[i] and y_train_folds[i]
# hold the points and labels of fold i.
X_train_folds = np.array_split(X_train, num_folds, axis=0)
y_train_folds = np.array_split(y_train, num_folds, axis=0)

# k_to_accuracies[k] will hold num_folds accuracy values for that k.
k_to_accuracies = {}

for k in k_choices:
    accuracies = []
    for i in range(num_folds):
        # Fold i is the validation set; the remaining folds are training data.
        X_test_cv = X_train_folds[i]
        X_train_cv = np.vstack(X_train_folds[:i] + X_train_folds[i + 1:])
        y_test_cv = y_train_folds[i]
        y_train_cv = np.hstack(y_train_folds[:i] + y_train_folds[i + 1:])

        classifier.train(X_train_cv, y_train_cv)
        dists_cv = classifier.compute_distances_no_loops(X_test_cv)
        y_test_pred = classifier.predict_labels(dists_cv, k)

        num_correct = np.sum(y_test_pred == y_test_cv)
        # Accuracy on this fold (the original divided by an undefined
        # num_training; divide by the fold size instead).
        accuracies.append(float(num_correct) / y_test_cv.shape[0])
    k_to_accuracies[k] = accuracies

# Print out the computed accuracies.
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
```

Question 3
Which of the following statements about k-Nearest Neighbor (k-NN) are true in a classification setting, and for all k? Select all that apply.
1. The decision boundary of the k-NN classifier is linear.
2. The training error of a 1-NN will always be lower than that of 5-NN.
3. The test error of a 1-NN will always be lower than that of a 5-NN.
4. The time needed to classify a test example with the k-NN classifier grows with the size of the training set.
5. None of the above.

Your Answer: 4
Your Explanation: The k-NN decision boundary is non-linear in general, so 1 is wrong, and 2 does not hold in general either. The test error of a 1-NN will not always be lower than that of a 5-NN, so 3 is wrong.

2. SVM

linear_svm.py

```python
from builtins import range
import numpy as np


def svm_loss_naive(W, X, y, reg):
    """Structured SVM loss, naive implementation (with loops).

    - W: (D, C) weights; X: (N, D) minibatch; y: (N,) labels with
      y[i] = c, 0 <= c < C; reg: regularization strength.
    Returns (loss, dW), with dW the same shape as W.
    """
    dW = np.zeros(W.shape)
    num_classes = W.shape[1]
    num_train = X.shape[0]
    loss = 0.0
    for i in range(num_train):
        scores = X[i].dot(W)
        correct_class_score = scores[y[i]]
        for j in range(num_classes):
            if j == y[i]:
                continue
            margin = scores[j] - correct_class_score + 1  # note delta = 1
            if margin > 0:
                loss += margin
                dW[:, y[i]] += -X[i]  # gradient for the correct class
                dW[:, j] += X[i]      # gradient for an incorrect class
    # Average over the batch, then add regularization. The 0.5 * reg
    # convention (matching the vectorized version below) makes the
    # regularization gradient exactly reg * W.
    loss /= num_train
    dW /= num_train
    loss += 0.5 * reg * np.sum(W * W)
    dW += reg * W
    return loss, dW


def svm_loss_vectorized(W, X, y, reg):
    """Structured SVM loss, vectorized. Same inputs and outputs as
    svm_loss_naive."""
    num_train = X.shape[0]
    scores = X.dot(W)  # (N, C)
    margin = scores - scores[np.arange(num_train), y].reshape(num_train, 1) + 1
    margin[np.arange(num_train), y] = 0.0  # the correct column contributes nothing
    margin = (margin > 0) * margin
    loss = margin.sum() / num_train + 0.5 * reg * np.sum(W * W)

    # Gradient: every positive margin contributes +x_i to column j and
    # -x_i to the correct column.
    margin = (margin > 0) * 1
    row_sum = np.sum(margin, axis=1)
    margin[np.arange(num_train), y] = -row_sum
    dW = X.T.dot(margin) / num_train + reg * W
    return loss, dW
```

linear_classifier.py

```python
from __future__ import print_function
from builtins import range
from builtins import object
import numpy as np
from cs231n.classifiers.linear_svm import *
from cs231n.classifiers.softmax import *


class LinearClassifier(object):

    def __init__(self):
        self.W = None

    def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
              batch_size=200, verbose=False):
        """Train this linear classifier with stochastic gradient descent.
        X is (N, D), y is (N,) with labels 0 <= c < C. Returns the list of
        per-iteration loss values."""
        num_train, dim = X.shape
        num_classes = np.max(y) + 1  # assume y takes values 0 .. K-1
        if self.W is None:
            # Lazily initialize W.
            self.W = 0.001 * np.random.randn(dim, num_classes)

        loss_history = []
        for it in range(num_iters):
            # Sample a minibatch with np.random.choice. Sampling with
            # replacement is faster; replace=False avoids duplicates
            # within a batch.
            mask = np.random.choice(num_train, batch_size, replace=False)
            X_batch = X[mask]
            y_batch = y[mask]

            # Evaluate loss and gradient, then take an SGD step.
            loss, grad = self.loss(X_batch, y_batch, reg)
            loss_history.append(loss)
            self.W += -learning_rate * grad

            if verbose and it % 100 == 0:
                print('iteration %d / %d: loss %f' % (it, num_iters, loss))
        return loss_history

    def predict(self, X):
        """Predict labels: the argmax class score for each row of X."""
        scores = X.dot(self.W)
        return np.argmax(scores, axis=1)

    def loss(self, X_batch, y_batch, reg):
        """Compute the loss and its derivative. Subclasses override this."""
        pass


class LinearSVM(LinearClassifier):
    """A subclass that uses the multiclass SVM loss function."""

    def loss(self, X_batch, y_batch, reg):
        return svm_loss_vectorized(self.W, X_batch, y_batch, reg)


class Softmax(LinearClassifier):
    """A subclass that uses the softmax + cross-entropy loss function."""

    def loss(self, X_batch, y_batch, reg):
        return softmax_loss_vectorized(self.W, X_batch, y_batch, reg)
```

Supplementary code

```python
# Use the validation set to tune the learning rate and regularization
# strength; about 0.39 validation accuracy is achievable. results maps
# (learning_rate, regularization_strength) -> (train_acc, val_acc).
results = {}
best_val = -1    # highest validation accuracy seen so far
best_svm = None  # the LinearSVM object that achieved it

learning_rates = [1e-7, 5e-5]
regularization_strengths = [2.5e4, 5e4]

iters = 2000  # use a small value (e.g. 100) while debugging
for lr in learning_rates:
    for rs in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train, y_train, learning_rate=lr, reg=rs, num_iters=iters)
        acc_train = np.mean(y_train == svm.predict(X_train))
        acc_val = np.mean(y_val == svm.predict(X_val))
        results[(lr, rs)] = (acc_train, acc_val)
        if best_val < acc_val:
            best_val = acc_val
            best_svm = svm

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' %
          (lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
```

Inline Question 2
Describe what your visualized SVM weights look like, and offer a brief explanation for why they look the way they do.

Your Answer: They look like blurry versions of the corresponding class images; each weight vector acts as a learned template for its class.

3. Softmax

softmax.py

```python
from builtins import range
import numpy as np


def softmax_loss_naive(W, X, y, reg):
    """Softmax loss, naive implementation (with loops). W is (D, C),
    X is (N, D), y is (N,); returns (loss, dW). Scores are shifted by
    their max for numeric stability."""
    loss = 0.0
    dW = np.zeros_like(W)
    num_classes = W.shape[1]
    num_train = X.shape[0]
    for i in range(num_train):
        scores = X[i].dot(W)
        scores = scores - np.max(scores)  # numeric stability
        scores_exp = np.exp(scores)       # exponentiate
        # Derivative of the scores w.r.t. the weights: each column is X[i].
        ds_w = np.repeat(X[i], num_classes).reshape(-1, num_classes)
        scores_exp_sum = np.sum(scores_exp)
        pk = scores_exp[y[i]] / scores_exp_sum
        loss += -np.log(pk)
        # Derivative of the loss w.r.t. the scores.
        dl_s = np.zeros(W.shape)
        for j in range(num_classes):
            if j == y[i]:
                dl_s[:, j] = pk - 1  # the correct class has a different derivative
            else:
                dl_s[:, j] = scores_exp[j] / scores_exp_sum
        dW += ds_w * dl_s
    loss /= num_train
    dW /= num_train
    loss += reg * np.sum(W * W)
    dW += W * 2 * reg
    return loss, dW


def softmax_loss_vectorized(W, X, y, reg):
    """Softmax loss, vectorized version. Same inputs and outputs as
    softmax_loss_naive."""
    num_train = X.shape[0]
    scores = X.dot(W)
    scores = scores - np.max(scores, 1, keepdims=True)  # numeric stability
    scores_exp = np.exp(scores)
    p = scores_exp / np.sum(scores_exp, 1, keepdims=True)
    loss = np.sum(-np.log(p[np.arange(num_train), y]))
    ind = np.zeros_like(p)
    ind[np.arange(num_train), y] = 1
    dW = X.T.dot(p - ind)
    loss = loss / num_train + reg * np.sum(W * W)
    dW = dW / num_train + W * 2 * reg
    return loss, dW
```

Question 1
Why do we expect our loss to be close to -log(0.1)? Explain briefly.

Your Answer: With small random weights, each of the C = 10 classes is predicted with probability about 1/C, so the loss is about -log(1/C) = -log(0.1).

Supplement

```python
# Use the validation set to tune hyperparameters for the softmax classifier;
# over 0.35 validation accuracy is achievable. Save the best trained
# classifier in best_softmax.
from cs231n.classifiers import Softmax

results = {}
best_val = -1
best_softmax = None

learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]

for lr in learning_rates:
    for reg in regularization_strengths:
        softmax = Softmax()
        softmax.train(X_train, y_train, lr, reg, num_iters=500, verbose=True)
        acc_tr = np.mean(y_train == softmax.predict(X_train))
        acc_val = np.mean(y_val == softmax.predict(X_val))
        results[(lr, reg)] = (acc_tr, acc_val)
        if best_val < acc_val:
            best_val = acc_val
            best_softmax = softmax

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' %
          (lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
```

Inline Question 2 - True or False
Suppose the overall training loss is defined as the sum of the per-datapoint loss over all training examples. It is possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss.

Your Answer: True
Your Explanation: A new point whose margins are all already satisfied contributes exactly zero SVM loss, so the total is unchanged; the softmax (cross-entropy) loss of any point is always strictly positive, so adding a point always changes the total.

4. two_layer_net

neural_net.py

```python
from __future__ import print_function
from builtins import range
from builtins import object
import numpy as np


class TwoLayerNet(object):
    """A two-layer fully-connected network:
    input - fully connected layer - ReLU - fully connected layer - softmax,
    with a softmax loss and L2 regularization on the weight matrices.
    Input dimension N, hidden dimension H, classification over C classes."""

    def __init__(self, input_size, hidden_size, output_size, std=1e-4):
        """Weights are small random values, biases zero, all stored in
        self.params: W1 (D, H), b1 (H,), W2 (H, C), b2 (C,)."""
        self.params = {}
        self.params['W1'] = std * np.random.randn(input_size, hidden_size)
        self.params['b1'] = np.zeros(hidden_size)
        self.params['W2'] = std * np.random.randn(hidden_size, output_size)
        self.params['b2'] = np.zeros(output_size)

    def loss(self, X, y=None, reg=0.0):
        """If y is None, return the (N, C) class scores for X; otherwise
        return (loss, grads) for this batch, where grads has the same keys
        as self.params."""
        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']
        N, D = X.shape

        # Forward pass.
        Z1 = X.dot(W1) + b1
        A1 = np.maximum(0, Z1)  # ReLU
        scores = A1.dot(W2) + b2

        # If the targets are not given, we're done.
        if y is None:
            return scores

        # Softmax loss plus L2 regularization for W1 and W2.
        scores -= np.max(scores, axis=1, keepdims=True)
        exp_scores = np.exp(scores)
        probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
        y_label = np.zeros((N, probs.shape[1]))
        y_label[np.arange(N), y] = 1
        loss = (-1) * np.sum(np.multiply(np.log(probs), y_label)) / N
        loss += reg * (np.sum(W1 * W1) + np.sum(W2 * W2))

        # Backward pass: derivatives of the weights and biases.
        grads = {}
        dZ2 = probs - y_label
        dW2 = A1.T.dot(dZ2) / N + 2 * reg * W2
        db2 = np.sum(dZ2, axis=0) / N
        dZ1 = dZ2.dot(W2.T) * (A1 > 0)  # ReLU gate
        dW1 = X.T.dot(dZ1) / N + 2 * reg * W1
        db1 = np.sum(dZ1, axis=0) / N
        grads['W2'], grads['b2'] = dW2, db2
        grads['W1'], grads['b1'] = dW1, db1
        return loss, grads

    def train(self, X, y, X_val, y_val,
              learning_rate=1e-3, learning_rate_decay=0.95,
              reg=5e-6, num_iters=100, batch_size=200, verbose=False):
        """Train with SGD; decay the learning rate once per epoch. Returns
        a dict of loss / train-accuracy / val-accuracy histories."""
        num_train = X.shape[0]
        iterations_per_epoch = max(num_train / batch_size, 1)

        loss_history = []
        train_acc_history = []
        val_acc_history = []

        for it in range(num_iters):
            # Random minibatch of training data and labels.
            batch_inx = np.random.choice(num_train, batch_size)
            X_batch = X[batch_inx, :]
            y_batch = y[batch_inx]

            loss, grads = self.loss(X_batch, y=y_batch, reg=reg)
            loss_history.append(loss)

            # SGD update for every parameter.
            self.params['W1'] -= learning_rate * grads['W1']
            self.params['b1'] -= learning_rate * grads['b1']
            self.params['W2'] -= learning_rate * grads['W2']
            self.params['b2'] -= learning_rate * grads['b2']

            if verbose and it % 100 == 0:
                print('iteration %d / %d: loss %f' % (it, num_iters, loss))

            # Every epoch, check accuracies and decay the learning rate.
            if it % iterations_per_epoch == 0:
                train_acc_history.append((self.predict(X_batch) == y_batch).mean())
                val_acc_history.append((self.predict(X_val) == y_val).mean())
                learning_rate *= learning_rate_decay

        return {
            'loss_history': loss_history,
            'train_acc_history': train_acc_history,
            'val_acc_history': val_acc_history,
        }

    def predict(self, X):
        """Assign each data point to the class with the highest score."""
        scores = self.loss(X)
        return np.argmax(scores, axis=1)
```

Supplement

```python
best_net = None  # store the best model here
results = {}
best_val = -1
learning_rates = [1.2e-3, 1.5e-3, 1.75e-3]
regularization_strengths = [1, 1.25, 1.5, 2]

# Tune hyperparameters using the validation set; store the best trained
# model in best_net.
for lr in learning_rates:
    for reg in regularization_strengths:
        net = TwoLayerNet(input_size, hidden_size, num_classes)
        net.train(X_train, y_train, X_val, y_val,
                  num_iters=1000, batch_size=200,
                  learning_rate=lr, learning_rate_decay=0.95,
                  reg=reg, verbose=False)
        y_train_acc = np.mean(net.predict(X_train) == y_train)
        y_val_acc = np.mean(net.predict(X_val) == y_val)
        results[(lr, reg)] = [y_train_acc, y_val_acc]
        if y_val_acc > best_val:
            best_val = y_val_acc
            best_net = net

for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' %
          (lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
```

Question
Now that you have trained a Neural Network classifier, you may find that your testing accuracy is much lower than the training accuracy. In what ways can we decrease this gap? Select all that apply.
1. Train on a larger dataset.
2. Add more hidden units.
3. Increase the regularization strength.
4. None of the above.

Your Answer: 1, 3
Your Explanation: A larger dataset and a stronger regularization strength both reduce overfitting, which is what causes the gap.

5. Features

Supplement 1

```python
# Use the validation set to tune the learning rate and regularization
# strength of an SVM on the extracted image features; accuracy near 0.44
# on the validation set is achievable. Save the best classifier in best_svm.
from cs231n.classifiers.linear_classifier import LinearSVM

learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]

results = {}
best_val = -1
best_svm = None

for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500, verbose=True)
        y_train_acc = np.mean(svm.predict(X_train_feats) == y_train)
        y_val_acc = np.mean(svm.predict(X_val_feats) == y_val)
        results[(lr, reg)] = [y_train_acc, y_val_acc]
        if y_val_acc > best_val:
            best_val = y_val_acc
            best_svm = svm

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' %
          (lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
```

Supplement 2

```python
# Train a two-layer neural network on the image features; store the best
# model in best_net.
from cs231n.classifiers.neural_net import TwoLayerNet

input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None

best_acc = -1
learning_rate = [1e-2, 1e-1, 5e-1]
regulations = [5e-3, 1e-2, 1e-1, 0.5]
for lr in learning_rate:
    for reg in regulations:
        # Build a fresh network for every setting (the original constructed
        # a single net outside the loops, which kept retraining one model).
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=1000, batch_size=200,
                  learning_rate=lr, learning_rate_decay=0.95,
                  reg=reg, verbose=True)
        val_acc = (net.predict(X_val_feats) == y_val).mean()
        if val_acc > best_acc:
            best_acc = val_acc
            best_net = net
        print('lr =', lr, 'reg =', reg, 'acc =', val_acc)
print('best_acc:', best_acc)
```
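One habit worth keeping throughout this assignment (my own sketch, not part of the original post): compare the analytic gradient against a centered numeric difference before trusting any loss implementation.

```python
# Numeric gradient check: f maps a weight matrix to a scalar loss;
# grad comes from e.g. svm_loss_vectorized or softmax_loss_vectorized.
import numpy as np

def numeric_gradient(f, W, h=1e-5):
    grad = np.zeros_like(W)
    it = np.nditer(W, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = W[ix]
        W[ix] = old + h
        fxph = f(W)          # f(W + h)
        W[ix] = old - h
        fxmh = f(W)          # f(W - h)
        W[ix] = old          # restore
        grad[ix] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad

# Usage sketch (assumes small W, X, y, reg already exist in the notebook):
# loss, grad = svm_loss_vectorized(W, X, y, reg)
# num_grad = numeric_gradient(lambda w: svm_loss_vectorized(w, X, y, reg)[0], W)
# print(np.max(np.abs(grad - num_grad)))  # should be around 1e-7 or smaller
```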

[Markdown] Usage Tutorial

Markdown tutorial: https://testerhome.com/markdownGuide
This is a typesetting demo showing how to use Markdown correctly. It is worth learning: it will give your articles a much clearer layout. Quoted text: Markdown is a text formatting syntax inspired…

Syntax guide

Ordinary content. This passage shows some small inline formats, such as:
bold - **bold**
italic - *italic*
strikethrough - ~~strikethrough~~
code mark - `code mark`
hyperlink - [hyperlink](http://github.com)
username@gmail.com - [username@gmail.com](mailto:username@gmail.com)

Mentioning users: @foo @bar @someone… With @ you can mention users in posts and replies; once submitted, the mentioned users receive a system notification so they can follow the post or reply.

Emoji: emoji symbols are supported. You can use the system's default emoji (not available to Windows users) or image emoticons; typing : brings up smart suggestions. Some examples: 😄😆😵😭😰😅😢😤😍☺️😎😩👍👎💯👏🔔🎁❓💣❤️☕🌀🙇💋🙏💦💩❗💢

Headings - Heading 3: you can use H2 through H6 by starting the line with ## (N). H1 cannot be used; it is automatically converted to H2. NOTE: don't forget the space after the #! (Heading4, Heading5 and Heading6 work the same way.)

Images:
![alt text](http://image-path.png)
![alt text](http://image-path.png "image title value")
![set image width and height](http://image-path.png =300x200)
![set image width](http://image-path.png =300x)
![set image height](http://image-path.png =x200)

Code blocks. Plain:

```
*emphasize* **strong**
_emphasize_ __strong__
@a = 1
```

Syntax highlighting is supported: if a language name follows the ```, you get highlighting. Ruby, for example:

```ruby
class PostController < ApplicationController
  def index
    @posts = Post.last_actived.limit(10)
  end
end
```

A Rails view:

```erb
<%= @posts.each do |post| %>
<div class="post"></div>
<% end %>
```

A YAML file:

```yaml
zh-CN:
  name: 姓名
  age: 年龄
```

Tip: supported language names include ruby, python, js, html, erb, css, coffee, bash, json, yml, xml…

Ordered and unordered lists.
Unordered list:
- Ruby
  - Rails
  - ActiveRecord
- Go
  - Gofmt
  - Revel
- Node.js
  - Koa
  - Express

Ordered list:
1. Node.js
   1. Express
   2. Koa
   3. Sails
2. Ruby
   1. Rails
   2. Sinatra
3. Go

Tables: if you need to present data, consider a table:

| header1 | header2 | header3 |
| ------- | ------- | ------- |
| cell1   | cell2   | cell3   |
| cell4   | cell5   | cell6   |

Paragraphs: a line left blank is automatically converted into a paragraph break, with spacing that aids reading. Note how the Markdown source below leaves blank lines.

Video embedding: currently YouTube and Youku videos are supported. Copy the URL of the video playback page from the browser address bar and paste it into the topic/reply text box; after submitting, it is automatically converted into a video player. For example:
YouTube https://www.youtube.com/watch?v=CvVvwh3BRq8
Vimeo https://vimeo.com/199770305
Youku http://v.youku.com/v_show/id_XMjQzMTY1MDk3Ng==.html

Font colors: light and dark variants of red, green, blue, yellow, cyan and purple text are supported (each demonstrated with a colored span on the original page).

Java - The String Class

Contents: String structure · Two ways to create a String object · Exercises · Characteristics of strings 🔥 · Common methods of the String class

[String structure]
(1) A String object holds a string, i.e. a sequence of characters (wrapped in double quotes);
(2) The characters of a string use the Unicode encoding; one character (letter or Chinese character alike) takes two bytes;
(3) The String class has many constructors; the common ones are:
String s1 = new String();
String s2 = new String(String original);
String s3 = new String(char[] a);
String s4 = new String(char[] a, int startIndex, int count);
String s5 = new String(byte[] b);
(4) String implements the Serializable interface [purpose: a String can be serialized and transmitted over a network] and the Comparable interface [String objects can be compared];
(5) String is a final class and cannot be extended by other classes;
(6) String has the field private final char value[]; which stores the string's contents. [Here final means the address held by value cannot change, though the array contents can.] Understanding: value is a reference to a char array; being final, it cannot be redirected to a different char array, i.e. the address cannot be modified.

[Two ways to create a String object]
Way 1, direct assignment: String s = "hspteacher"; the double quotes wrap a string constant.
Way 2, calling a constructor: String s = new String("hspteacher");
The difference between the two:
Way 1 first checks whether the constant pool already contains the data space for "hspteacher": if yes, s points straight at it; if not, the constant is created first and then pointed to. s ends up pointing at an address in the constant pool.
Way 2 first creates an object on the heap containing a value field; value points at "hspteacher" in the constant pool (creating the constant there first if it does not yet exist). s ends up pointing at the heap object, whose value in turn points at the constant-pool address.
Related example: String's equals method is overridden to compare the characters in value one by one, i.e. it tests whether two strings have exactly the same contents.

[Exercises]
Reading: the intern method returns the address of that string in the constant pool, creating the entry first if necessary (whereas a String created with new points at the heap, and its value in turn points at the constant pool).

[Characteristics of strings]
(1) String is a final class representing an immutable character sequence;
(2) Strings are immutable: once a string object has been allocated, its contents cannot change.
Understanding: String is final and value is final, so the character contents of a created String object cannot be changed; but an object reference can be pointed at a different string object, which is how the string a variable refers to "changes".
Related example 🔥🔥🔥: knowledge points involved: the characteristics of strings and Java's parameter-passing mechanism. Draw the memory diagram! Output: hspandhava

[Common methods of the String class]
Key point: read the source code.
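As a cross-language aside (my own addition, not from the original notes): the constant-pool behaviour described above has a loose analogue in CPython's string interning, which makes the pointer-versus-contents distinction easy to poke at interactively.

```python
# A loose CPython analogue of Java's string constant pool (an illustration
# only, not a claim that the mechanisms are identical).
import sys

a = "hspteacher"
b = "hspteacher"
print(a is b)        # True on CPython: identical literals are typically pooled

c = "".join(["hsp", "teacher"])  # built at runtime: a fresh object
print(c == a)        # True  - same contents (like Java's equals)
print(c is a)        # False - different object (like Java's ==)

d = sys.intern(c)    # sys.intern is a rough analogue of String.intern()
print(d is a)        # True  - both now reference the pooled string
```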

Manthan, Codefest 19 (open for everyone, rated, Div. 1 + Div. 2) F. Bits And Pieces, SOS DP

F. Bits And Pieces

Problem statement
You are given an array 𝑎 of 𝑛 integers. You need to find the maximum value of 𝑎𝑖 | (𝑎𝑗 & 𝑎𝑘) over all triplets (𝑖, 𝑗, 𝑘) such that 𝑖 < 𝑗 < 𝑘. Here & denotes the bitwise AND operation, and | denotes the bitwise OR operation.

Input
The first line of input contains the integer 𝑛 (3 ≤ 𝑛 ≤ 10^6), the size of the array 𝑎. The next line contains 𝑛 space-separated integers 𝑎1, 𝑎2, …, 𝑎𝑛 (0 ≤ 𝑎𝑖 ≤ 2·10^6), representing the elements of the array 𝑎.

Output
Output a single integer, the maximum value of the expression given in the statement.

Examples
input: 3 / 2 4 6 → output: 6
input: 4 / 2 8 4 7 → output: 12

Note
In the first example, the only possible triplet is (1, 2, 3). Hence, the answer is 2 | (4 & 6) = 6. In the second example, there are 4 possible triplets: (1,2,3), value of which is 2 | (8 & 4) = 2; (1,2,4), value of which is 2 | (8 & 7) = 2; (1,3,4), value of which is 2 | (4 & 7) = 6; (2,3,4), value of which is 8 | (4 & 7) = 12. The maximum value hence is 12.

Problem summary
Given n numbers, find three distinct indices i < j < k maximizing a[i] | (a[j] & a[k]).

Solution
The fairly obvious approach is to enumerate a[i], then decide greedily, from the highest bit to the lowest, what the best achievable (a[j] & a[k]) is. We maintain (a[j] & a[k]) separately: for each a[j], enumerate its submasks and count them. For example, if a[j] is 10001, we increment cnt[10000], cnt[00001] and cnt[10001]; if it is 00101, we increment cnt[00100], cnt[00001] and cnt[00101]. Then, scanning bits from high to low, we greedily extend the current mask x as long as cnt[x] ≥ 2, since cnt[x] ≥ 2 means at least two elements contain x as a submask, i.e. some a[j] & a[k] covers x. The submask enumeration can be done with SOS DP (the commented-out variant below), or with the rather slick per-bit recursion of solution 1.

Code

```cpp
#include <bits/stdc++.h>
using namespace std;

const int N = 4000005;
int n, cnt[N], ans, a[N];

// Count every submask of x (capped at 2 occurrences, which is all we need).
// x & (x - 1) clears the lowest set bit; x & -x isolates it.
void insert(int x, int y) {
    if (cnt[x | y] == 2) return;
    if (x == 0) { cnt[y]++; return; }
    insert(x & (x - 1), y | (x & -x));
    insert(x & (x - 1), y);
}

int main() {
    scanf("%d", &n);
    for (int i = 1; i <= n; i++) scanf("%d", &a[i]);
    // Scan i from right to left so that only elements with index > i
    // have been inserted when a[i] plays the role of the OR operand.
    for (int i = n; i >= 1; i--) {
        if (i + 2 <= n) {
            int now = 0;
            for (int j = 20; j >= 0; j--)
                if (!((1 << j) & a[i]) && cnt[now | 1 << j] == 2)
                    now |= 1 << j;  // greedily take bits a[i] lacks
            ans = max(ans, now | a[i]);
        }
        insert(a[i], 0);
    }
    printf("%d\n", ans);
    return 0;
}
```

SOS DP variant:

```cpp
#include <bits/stdc++.h>
using namespace std;

const int maxn = 1 << 21;
int dp[maxn][21], a[maxn], n;

// dp[num][k] counts (capped at 2) how many inserted elements contain num
// as a submask, with bits below k already resolved.
void sosdp(int num, int k) {
    if (k > 20) return;
    if (dp[num][k] > 1) return;
    dp[num][k]++;
    sosdp(num, k + 1);
    if (num >> k & 1) sosdp(num ^ (1 << k), k);
}

int main() {
    scanf("%d", &n);
    for (int i = 1; i <= n; i++) scanf("%d", &a[i]);
    int ans = 0;
    for (int i = n; i >= 1; i--) {
        int res = 0, t = 0;
        for (int j = 20; j >= 0; j--) {
            if (a[i] >> j & 1) {
                res |= 1 << j;
            } else if (dp[t | (1 << j)][20] > 1) {
                res |= 1 << j;
                t |= 1 << j;
            }
        }
        sosdp(a[i], 0);
        if (i <= n - 2) ans = max(ans, res);
    }
    cout << ans << endl;
}
```
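A tiny brute-force cross-check of the target expression (my own Python sketch, not from the original post), useful for stress-testing either solution on small arrays:

```python
# Maximum of a[i] | (a[j] & a[k]) over all i < j < k, by exhaustive search.
from itertools import combinations

def brute(a):
    return max(a[i] | (a[j] & a[k])
               for i, j, k in combinations(range(len(a)), 3))

print(brute([2, 4, 6]))     # 6, matching the first sample
print(brute([2, 8, 4, 7]))  # 12, matching the second sample
```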

Using Python's rsa Module [sign: signing and verification]

613594737139603959496839915382306483478406548824172663536589804989)
Encoding result: helloword
Public-key encryption result: (ztJc3d]@d/@LBC14@0-7=teKHuZJgcRuAKk=M9=Ɋ'5:ltFf,qT->UD@
Decryption result: helloword

Usage 2. Note: you can generate the public/private key pair directly and convert between PKCS formats:

```python
# -*- coding: utf-8 -*-
import rsa
import base64
def ge…
```
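Since the post is cut off above, here is a minimal sketch of the signing and verification flow the title refers to, using the `rsa` package (pip install rsa). The names and parameters are my own choices, not recovered from the truncated code:

```python
# Sign with the private key, verify with the public key.
import rsa

pub_key, priv_key = rsa.newkeys(2048)  # generate a key pair

message = "helloword".encode("utf-8")
signature = rsa.sign(message, priv_key, "SHA-256")

try:
    # rsa.verify returns the name of the hash method on success and
    # raises rsa.VerificationError if the signature does not match.
    used_hash = rsa.verify(message, signature, pub_key)
    print("signature valid, hash:", used_hash)
except rsa.VerificationError:
    print("signature invalid")
```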

Pig-Raising Diary 2021.9.22

Wednesday, rain. It has been drizzling all day, the weather has turned cold again, and my mood is low. My throat has an ulcer and really hurts; I hope it won't get as bad as it did last winter. Haven't seen 🐖 all day, and I miss my 🐖 so much. 🐖 said I was absent-minded on the subway yesterday and ignored her. Not true! I missed 🐖 a lot, but it was raining and 🐖 likes staying in her room, so I didn't have the heart to drag 🐖 out; still, studying alone in Zhengxin is really lonely. Almost the whole school took nucleic-acid tests today; everyone tested in the morning was fine, and the afternoon results come out tomorrow. I hope nothing goes wrong. Picked up a second-hand copy of the "watermelon book" (Zhou Zhihua's Machine Learning) for 10 yuan, to read when I have time. Did two LeetCode problems and one AcWing problem, and watched the algorithms course in the evening. 22:18 Zhengxin 915

Pig-Raising Diary 2021.9.26

Sunday, overcast. Studied in Zhengxin in the morning; 🐖 had class in Zhengxin too, but I couldn't see 🐖, and at noon 🐖 went to eat with her roommates, so I ate alone. Worked on my internship report: churned out two thousand-odd words in the morning and three thousand in the evening, which ate into my problem-solving time. In the afternoon I attended the probability theory and computer organization classes. Once again I didn't see little 🐖 until evening; why is the time I get with little 🐖 each day so short? Tomorrow is our third nucleic-acid test. At little 🐖's suggestion we had supermarket fast food for dinner; I hadn't had rice in days and finally ate some today. Did 1 LeetCode problem; it's almost eleven, but the daily streak must not break. 22:50 Zhengxin 915

Pig-Raising Diary 2021.9.25

Saturday, overcast. Took the second nucleic-acid test in the morning, then went to the Science Park; the group meeting ran from one-thirty to three, after which I went back to Zhengxin to wait for 🐖. Didn't see 🐖 until five in the afternoon; missed 🐖 so much. The internship report, 5000 words required, is due Monday; I wrote 2000 in the morning and will deal with the rest tomorrow. 🐖 had experiments and a defense all day and was completely worn out, sleeping in the classroom for an hour in the evening. While I watched the algorithms course, 🐖 slept beside me, and watching made me drowsy too; I nearly dozed off. Did 3 LeetCode problems. The algorithm problems seem to eat up so much of my time every day: by the time I had figured those few out it was past ten, too late to do anything else. 22:16 Zhengxin 915

Pig-Raising Diary 2021.10.26

Tuesday, sunny. Watched the algorithms course in the morning and learned the unbounded knapsack problem and its optimization. 🐖 had an experiment at one, so we didn't have lunch together; I went and had fried dough twists by myself. Took one class in the afternoon, during which I did two LeetCode problems and learned about monotonic stacks. Skipped computer organization: the efficiency is too low and it rather wastes time; self-study moves faster. After class I went back to the ninth floor to write Qt. In the evening 🐖 and I had spicy stir-fry pot and rice noodles. Back again, I wrote Qt until a little past nine, got a bit tired, and started the project course, learning about static and dynamic libraries. Suddenly remembered I had forgotten to eat the 🍊; 🐖 and I finished the 🍊 before heading back to the dorms. 23:39 Zhengxin 915

open-capacity-platform (Open Capability Platform): Handling Release Issues

Noticed that no configuration file was taking effect; perhaps the configuration file itself was the problem. Only after renaming user-center.yaml to application.yml did the system recognize it.

Problem 2: The server timezone value 'й׼ʱ' is unrecognized or represents more than one time zone
Description: java.sql.SQLException: The server timezone value 'й׼ʱ' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specific time zone value if you want to utilize time zone support. (The mojibake 'й׼ʱ' is the GBK-encoded Windows name 中国标准时间, "China Standard Time", mis-decoded.)

gm Error: Command failed: Ч - /data

The same code ran fine on other machines, but on one particular machine it hit gm Error: Command failed: Ч - /data. The Node.js code is as follows:

```js
const gm = require('gm').subClass({ imageMagick: true });

gm(imagePath)
  .resize(w)
  .write("E:/data/aa.png", function (err) {
    if (err) console.log(err);
  });
```

When installing ImageMagick 7.0, the "Install legacy utilities (e.g. convert)" option must be ticked. After reinstalling and running again, the problem was solved.

Pig-Raising Diary 2021.10.7

Thursday, sunny. Little 🐖 felt really unwell today; I hope piggy feels better tomorrow~ The senior sister assigned the Qt project tasks today; I'll get to mine tomorrow. Since little 🐖 wasn't feeling well, I walked 🐖 back to her dorm early in the evening, then went to Chengyi to study for a while. Did one LeetCode problem and watched the algorithms course; the rest of the time went into learning Qt. Study efficiency was quite high during the day, but my evening state wasn't great. 22:58 Chengyi

Iterators

Hey bro, do you know what an iterator is? Do you know what iterators are for? I didn't, and so this post was born. 🥳🥳🥳

What an iterator is: by the official definition, an iterator is an object that implements a next() function returning an object of the form {value, done}, where value is the value produced by the current next call and done indicates whether we have reached the end.

What iterators are for: personally, I think the main use of iterators is to control how an object is generated, step by step, according to our own rules; and beyond that there are generators…
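For comparison (my own sketch, not from the original post, which is about JavaScript): Python expresses the same protocol with __next__ and StopIteration instead of a {value, done} pair.

```python
# A counter iterator: each next() call yields the next value until the
# end is reached, mirroring the {value, done} contract described above
# (done: true corresponds to raising StopIteration in Python).
class Counter:
    def __init__(self, stop):
        self.current = 0
        self.stop = stop

    def __iter__(self):
        return self

    def __next__(self):
        if self.current >= self.stop:  # "done: true"
            raise StopIteration
        value = self.current           # "value"
        self.current += 1
        return value

print(list(Counter(3)))  # [0, 1, 2]
```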

Codeforces Round #594 (Div. 2) B. Grow The Tree, easy problem

B. Grow The Tree

Gardener Alexey teaches competitive programming to high school students. To congratulate Alexey on the Teacher's Day, the students have gifted him a collection of wooden sticks, where every stick has an integer length. Now Alexey wants to grow a tree from them.

The tree looks like a polyline on the plane, consisting of all sticks. The polyline starts at the point (0, 0). While constructing the polyline, Alexey will attach sticks to it one by one in arbitrary order. Each stick must be either vertical or horizontal (that is, parallel to the OX or OY axis). It is not allowed for two consecutive sticks to be aligned simultaneously horizontally or simultaneously vertically. See the images below for clarification.

Alexey wants to make a polyline in such a way that its end is as far as possible from (0, 0). Please help him to grow the tree this way.

Note that the polyline defining the form of the tree may have self-intersections and self-touches, but it can be proved that the optimal answer does not contain any self-intersections or self-touches.

Input
The first line contains an integer 𝑛 (1 ≤ 𝑛 ≤ 100000) — the number of sticks Alexey got as a present. The second line contains 𝑛 integers 𝑎1, …, 𝑎𝑛 (1 ≤ 𝑎𝑖 ≤ 10000) — the lengths of the sticks.

Output
Print one integer — the square of the largest possible distance from (0, 0) to the tree end.

Examples
input: 3 / 1 2 3 → output: 26
input: 4 / 1 1 2 2 → output: 20

Note
The following pictures show optimal trees for the example tests. The squared distance in the first example equals 5·5 + 1·1 = 26, and in the second example 4·4 + 2·2 = 20.

Problem summary
You are given n sticks, the i-th of length a[i]. Starting at (0, 0), you lay them down alternately: one horizontal, one vertical, and so on. What is the largest possible squared distance from (0, 0) to the end of the polyline?

Solution
The problem reduces to: given a + b = C (a is the total horizontal length, b the total vertical length), maximize a² + b². Setting b = C − a gives a² + b² = 2a² − 2aC + C², an upward-opening parabola in a with axis of symmetry a = C/2, so the maximum is attained when a is as close as possible to 0 or to C. In other words, we want the difference between a and b to be as large as possible. Since the sticks alternate directions, one group gets ⌊n/2⌋ sticks and the other ⌈n/2⌉; putting the ⌊n/2⌋ shortest sticks into one group maximizes the gap. A brute-force check of this greedy choice appears after the code.

Code

```cpp
#include <bits/stdc++.h>
using namespace std;

const int maxn = 1e5 + 7;
long long a[maxn], sum, sum1;
int n;

int main() {
    cin >> n;
    for (int i = 0; i < n; i++) {
        cin >> a[i];
        sum += a[i];
    }
    sort(a, a + n);
    // The n/2 shortest sticks go to one axis, the rest to the other:
    // this maximizes |a - b| and hence a^2 + b^2.
    for (int i = 0; i < n / 2; i++) {
        sum1 += a[i];
    }
    cout << sum1 * sum1 + (sum - sum1) * (sum - sum1) << endl;
}
```
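The brute-force check mentioned above (my own Python sketch, not from the original post): try every way of assigning ⌊n/2⌋ sticks to one axis and confirm the samples.

```python
# Sticks alternate axes, so they split into groups of sizes floor(n/2)
# and ceil(n/2); maximize (sum of group 1)^2 + (sum of group 2)^2 over
# all such splits by exhaustive search.
from itertools import combinations

def best_squared_distance(sticks):
    n = len(sticks)
    total = sum(sticks)
    best = 0
    for combo in combinations(range(n), n // 2):
        s = sum(sticks[i] for i in combo)
        best = max(best, s * s + (total - s) * (total - s))
    return best

print(best_squared_distance([1, 2, 3]))     # 26, matching the first sample
print(best_squared_distance([1, 1, 2, 2]))  # 20, matching the second sample
```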

Pig-Raising Diary 2021.11.6

Saturday, overcast, temperature dropping. 🐖 and I studied in Zhengxin all day~ Watched the algorithms course in the morning, did four LeetCode problems in the afternoon, worked on homework a bit, and watched the project course in the evening. For dinner 🐖 and I had celery stir-fried with pork, plus pork knuckle. The knuckle was quite tasty, just a bit pricey: only a few slices for nine-odd yuan. Good study efficiency today; tomorrow, off to the Science Park to write Qt. 🐖 is sleepy, back to the dorm~ 23:19 Zhengxin 915

Pig-Raising Diary 2021.11.13

Saturday, sunny. Moved dorms. Apartment 2 stands empty now, everyone gone and the place a mess; looking at it makes me a little sad, since somewhere I lived for more than two years now looks like a ruin. At noon I made 🐖 angry again; must reflect! 🐖 felt less unwell today and had an appetite at dinner. The hot-water bottle 🐖 bought arrived; it's a bit different from what we imagined, haha. Did four LeetCode problems and watched a bit of the project course; feeling rather tired today. 23:28 Zhengxin 915

Pig-Raising Diary 2021.11.25

Thursday, sunny. Today's LeetCode daily check-in problem was "Poor Pigs" (poor little 🐖), hahaha, so cute; a pity it's a bit hard, so I went straight to the solution... Neither 🐖 nor I had any class all day~ I spent most of the time writing my thesis proposal, which is ninety-five percent done. Did three LeetCode problems and watched two episodes of the C++ course. 🐖 booked a gym session for me tomorrow~ Tomorrow: finish the proposal! 23:38 Zhengxin 915

Pig-Raising Diary 2021.11.26

Friday, sunny. Got up pretty late both of the last two days. 🐖 took her coordination chemistry exam today and came out happy; looks like it went well. Worked out a bit at noon. In the evening 🐖 and I had chicken-feet hot pot; the feet felt different from the ones in the pork-and-crab pot, not as soft and sticky (maybe the heat wasn't enough?). Did 4 LeetCode problems. Finished the thesis proposal and will send it to my senior tomorrow morning; it feels pretty badly written, and I'll probably get told off... 23:41 Zhengxin 915
