Analysis of the core BitTorrent network code in PeerSim

First, a rough overview of how the BitTorrent network operates:

In the start-up phase, a node joins the network and sends a message to the tracker node; the tracker returns a list of several neighbors.

After receiving the list, the node sends a BITFIELD message to each neighbor to learn their file status. It then determines which piece it needs and sends an INTERESTED message to the neighbors that own that piece. Based on each neighbor's bandwidth (transfer performance over the last 20 seconds), the local node selects the top 3, puts them in the unchoked state, and sends block requests to them.
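The top-3 selection can be sketched as follows. This is a minimal illustration, not the simulator's actual choking code; the class and method names are invented for this example, and the rates array stands in for the per-neighbor bandwidth measured over the last observation window.

```java
import java.util.Arrays;

// Illustrative sketch of the unchoke selection: rank neighbors by the
// download rate they provided during the last window and keep the top few.
public class ChokeSketch {
    // rates[i] = bytes received from neighbor i during the last 20 s window;
    // returns the indices of the `slots` best neighbors, best first.
    public static int[] topUnchoked(double[] rates, int slots) {
        Integer[] idx = new Integer[rates.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // sort neighbor indices by descending measured rate
        Arrays.sort(idx, (a, b) -> Double.compare(rates[b], rates[a]));
        int n = Math.min(slots, idx.length);
        int[] unchoked = new int[n];
        for (int i = 0; i < n; i++) unchoked[i] = idx[i];
        return unchoked;
    }
}
```

In the real protocol this ranking is recomputed periodically (the CHOKE_TIME event below), so the unchoked set tracks whoever has recently been the best uploader.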

When a request message arrives, the node replies with a PIECE message. Note that the local node serves at most 10 uploads at a time: while fewer than 10 pieces are being uploaded, a request is served immediately; otherwise it is enqueued, and the queued requests are served one by one, in order, as uploads complete.
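The upload policy just described can be sketched like this. It is a simplified stand-in for the simulator's `nPiecesUp` counter and `requestToServe` queue; the class and method names here are illustrative only.

```java
import java.util.ArrayDeque;

// Illustrative sketch of the upload policy: serve a block request right away
// while fewer than 10 uploads are in progress, otherwise park it in a FIFO
// queue to be served as uploads finish.
public class UploadQueueSketch {
    static final int MAX_UPLOADS = 10;
    public int nPiecesUp = 0;                      // uploads currently in progress
    public final ArrayDeque<Integer> pending = new ArrayDeque<>();

    // returns true if the request was served immediately
    public boolean onRequest(int blockId) {
        if (nPiecesUp < MAX_UPLOADS) {
            nPiecesUp++;                           // start uploading this block
            return true;
        }
        pending.add(blockId);                      // too busy: queue it for later
        return false;
    }

    // called when one upload finishes; serve the next queued request, if any
    public void onUploadDone() {
        nPiecesUp--;
        Integer next = pending.poll();
        if (next != null) nPiecesUp++;             // immediately start a queued upload
    }
}
```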

Whenever a node finishes downloading a piece, it sends a HAVE message to all its neighbors, announcing that it now has that piece available to share.
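HAVE messages are also what keeps the rarest-first bookkeeping up to date: each HAVE for piece p increments a per-piece counter (the `rarestPieceSet` array in the code below), and the next piece to request is the least-replicated one we don't yet own. The sketch below is a simplification of the simulator's `getPiece()` (which also breaks ties randomly); the class name is invented for this example.

```java
// Illustrative sketch of rarest-first selection: rarestPieceSet[p] counts
// how many neighbors own piece p (incremented on every HAVE / BITFIELD).
public class RarestFirstSketch {
    // returns the index of the rarest piece we don't own yet, or -1 if none
    public static int pickRarest(int[] rarestPieceSet, boolean[] owned) {
        int best = -1;
        for (int p = 0; p < rarestPieceSet.length; p++) {
            // skip pieces we already have and pieces nobody offers
            if (owned[p] || rarestPieceSet[p] == 0) continue;
            if (best == -1 || rarestPieceSet[p] < rarestPieceSet[best]) best = p;
        }
        return best;
    }
}
```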

Below is bittorrent.java, with my own annotations added:

/*
 * Copyright (c) 2007-2008 Fabrizio Frioli, Michele Pedrolli
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 *
 * --
 *
 * Please send your questions/suggestions to:
 * {fabrizio.frioli, michele.pedrolli} at studenti dot unitn dot it
 *
 */

package peersim.bittorrent;

import peersim.core.*;
import peersim.config.*;
import peersim.edsim.*;
import peersim.transport.*;

/**
 *    This is the class that implements the BitTorrent module for Peersim
 */
public class BitTorrent implements EDProtocol {
    // Strings used to read the protocol parameters from the configuration file
    /**
     *    The size in Megabytes of the file being shared.
     *    @config
     */
    private static final String PAR_SIZE="file_size";
    /**
     *    The Transport used by the protocol.
     *    @config
     */
    private static final String PAR_TRANSPORT="transport";
    /**
     *    The maximum number of neighbors that a node can have.
     *    @config
     */
    private static final String PAR_SWARM="max_swarm_size";
    /**
     *    The maximum number of peers returned by the tracker when a new
     *    set of peers is requested through a <tt>TRACKER</tt> message.
     *    @config
     */
    private static final String PAR_PEERSET_SIZE="peerset_size";
    /**
     *    Defines how much the network can grow with respect to the <tt>network.size</tt>
     *  when {@link NetworkDynamics} is used.
     *    @config
     */
    private static final String PAR_MAX_GROWTH="max_growth";
    /**
     *    The number of requests for the same block sent to different peers.
     *    @config
     */
    private static final String PAR_DUP_REQ = "duplicated_requests";

    // Type codes for the protocol's messages and internal events

    /**
     *    KEEP_ALIVE message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int KEEP_ALIVE = 1;

    /**
     *    CHOKE message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int CHOKE = 2;

    /**
     *    UNCHOKE message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int UNCHOKE = 3;

    /**
     *    INTERESTED message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int INTERESTED = 4;

    /**
     *    NOT_INTERESTED message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int NOT_INTERESTED = 5;

    /**
     *    HAVE message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int HAVE = 6;

    /**
     *    BITFIELD message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int BITFIELD = 7;

    /**
     *    REQUEST message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int REQUEST = 8;

    /**
     *    PIECE message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int PIECE = 9;

    /**
     *    CANCEL message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int CANCEL = 10;

    /**
     *    TRACKER message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int TRACKER = 11;

    /**
     *    PEERSET message.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int PEERSET = 12;

    /**
     *    CHOKE_TIME event.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int CHOKE_TIME = 13;

    /**
     *    OPTUNCHK_TIME event.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int OPTUNCHK_TIME = 14;

    /**
     *    ANTISNUB_TIME event.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int ANTISNUB_TIME = 15;

    /**
     *    CHECKALIVE_TIME event.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int CHECKALIVE_TIME = 16;

    /**
     *    TRACKERALIVE_TIME event.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int TRACKERALIVE_TIME = 17;

    /**
     *    DOWNLOAD_COMPLETED event.
     *    @see SimpleEvent#type "Event types"
     */
    private static final int DOWNLOAD_COMPLETED = 18;

    // A long list of field definitions: read them carefully, and refer back here when needed

    /**
     *    The maximum connection speed of the local node.
     */
    int maxBandwidth;   // maximum bandwidth (connection speed) of the local node

    /**
     *    Stores the neighbors ordered by ID.
     *  @see Element
     */
    private peersim.bittorrent.Element byPeer[];    // neighbors ordered by ID

    /**
     *    Contains the neighbors ordered by bandwidth as needed by the unchoking
     *    algorithm.
     */
    private peersim.bittorrent.Element byBandwidth[]; // neighbors ordered by bandwidth

    /**
     *    The Neighbors list.
     */
    private Neighbor cache[];          // the neighbor list, used all over the protocol

    /**
     *    Reference to the neighbors that unchoked the local node.
     */
    private boolean unchokedBy[];       // neighbors that have unchoked the local node
    /**
     *    Number of neighbors in the cache. When it decreases under 20, a new peerset
     *    is requested to the tracker.
     */
    private int nNodes = 0;         // number of neighbors in the cache; when it drops below 20, a new request is sent to the tracker

    /**
     *    Maximum number of nodes in the network.
     */
    private int nMaxNodes;          // maximum number of nodes in the network

    /**
     *    The status of the local peer. 0 means that the current peer is a leecher, 1 a seeder.
     */
    private int peerStatus;        // peer status: 0 = leecher, 1 = seeder

    /**
     *    Defines how much the network can grow with respect to the <tt>network.size</tt>
     *  when {@link NetworkDynamics} is used.
     */
    public int maxGrowth;

    /**
     *    File status of the local node. Contains the blocks owned by the local node.
     */
    private int status[];   // blocks owned by the local node

    /**
     *    Current number of Bitfield requests sent. It must be taken into account
     *    before sending another one.
     */
    private int nBitfieldSent = 0;

    /**
     *    Current number of pieces in upload from the local peer.
     */
    public int nPiecesUp = 0;
    /**
     *    Current number of pieces in download to the local peer.
     */
    public int nPiecesDown = 0;

    /**
     *    Current number of pieces completed.
     */
    private int nPieceCompleted = 0;

    /**
     *    Current downloading piece ID, the previous lastInterested piece.
     */
    int currentPiece = -1;    // ID of the piece currently being downloaded

    /**
     *    Used to compute the average download rates in the choking algorithm. Stores the
     *    number of <tt>CHOKE</tt> events.
     */
    int n_choke_time = 0;

    /**
     *    Used to send the <tt>TRACKER</tt> message when the local node has 20 neighbors
     *    for the first time.
     */
    boolean lock = false;

    /**
     *    Number of peers interested in my pieces.
     */
    int numInterestedPeers = 0;

    /**
     *    Last piece for which the local node sent an <tt>INTERESTED</tt> message.
     */
    int lastInterested = -1;

    /**
     *    The status of the current piece in download. Length 16; every time the local node
     *    receives a PIECE message, it updates the corresponding block's cell. The cell
     *    contains the ID for that block of that piece. If an already owned
     *    block is received, it is discarded.
     */
    private int pieceStatus[];

    /**
     *    Length of the file. Stored as number of pieces (256KB each one).
     */
    int nPieces;   // file length, as a number of pieces

    /**
     *    Contains the neighbors' file status. Every row represents a
     *    node and every cell has value 0 if the neighbor doesn't
     *    have the piece, 1 otherwise. It has {@link #swarmSize} rows and {@link #nPieces}
     *    columns.
     */
    int [][]swarm;  // neighbors' file status: one row per node, one column per piece

    /**
     *    The summation of the swarm's rows. Calculated every time a {@link #BITFIELD} message
     *    is received and updated every time a HAVE message is received.
     */
    int rarestPieceSet[];     // the rarest-first piece set

    /**
     *    The five pending block requests.
     */
    int pendingRequest[];      // pending block requests

    /**
     *    The maximum swarm size (default is 80)
     */
    int swarmSize;

    /**
     *    The size of the peerset. This is the number of "friends" nodes
     *    sent from the tracker to each new node (default: 50)
     */
    int peersetSize;

    /**
     * The ID of the current node
     */
    private long thisNodeID;

    /**
     *    Number of duplicated requests as specified in the configuration file.
     *    @see BitTorrent#PAR_DUP_REQ
     */
    private int numberOfDuplicatedRequests;

    /**
     *    The queue where the requests to serve are stored.
     *    The default dimension of the queue is 20.
     */
    Queue requestToServe = null;

    /**
     *    The queue where the out of sequence incoming pieces are stored
     *    waiting for the right moment to be processed.
     *    The default dimension of the queue is 100.
     */
    Queue incomingPieces = null;

    /**
     *    The Transport ID.
     *    @see BitTorrent#PAR_TRANSPORT
     */
    int tid;

    /**
     *    The reference to the tracker node. If equals to <tt>null</tt>, the local
     *    node is the tracker.
     */
    private Node tracker = null;

    /**
     *    The default constructor. Reads the configuration file and initializes the
     *    configuration parameters.
     *    @param prefix the component prefix declared in the configuration file
     */
    public BitTorrent(String prefix){ // Used for the tracker's protocol
        tid = Configuration.getPid(prefix+"."+PAR_TRANSPORT);
        nPieces = (int)((Configuration.getInt(prefix+"."+PAR_SIZE))*1000000/256000);
        swarmSize = (int)Configuration.getInt(prefix+"."+PAR_SWARM);
        peersetSize = (int)Configuration.getInt(prefix+"."+PAR_PEERSET_SIZE);
        numberOfDuplicatedRequests = (int)Configuration.getInt(prefix+"."+PAR_DUP_REQ);
        maxGrowth = (int)Configuration.getInt(prefix+"."+PAR_MAX_GROWTH);
        nMaxNodes = Network.getCapacity()-1;
    }

    /**
     *    Gets the reference to the tracker node.
     *    @return the reference to the tracker
     */
    public Node getTracker(){
        return tracker;
    }

    /**
     *    Gets the number of neighbors currently stored in the cache of the local node.
     *    @return the number of neighbors in the cache
     */
    public int getNNodes(){
        return this.nNodes;
    }

    /**
     *    Sets the reference to the tracker node.
     *    @param t the tracker node
     */
    public void setTracker(Node t){
        tracker = t;
    }

    /**
     *    Sets the ID of the local node.
     *    @param id the ID of the node
     */
    public void setThisNodeID(long id) {
        this.thisNodeID = id;
    }

    /**
     *    Gets the ID of the local node.
     *    @return the ID of the local node
     */
    public long getThisNodeID(){
        return this.thisNodeID;
    }

    /**
     *    Gets the file status of the local node.
     *    @return the file status of the local node
     */
    public int[] getFileStatus(){
        return this.status;
    }

    /**
     *    Initializes the tracker node. This method
     *    only performs the initialization of the tracker's cache.
     */
    public void initializeTracker() {
        cache = new Neighbor[nMaxNodes+maxGrowth];
        for(int i=0; i<nMaxNodes+maxGrowth; i++){
            cache[i]= new Neighbor();
        }
    }

    /**
     *    <p>Checks the number of neighbors and if it is equal to 20
     *    sends a TRACKER message to the tracker, asking for a new
     *    peer set.</p>
     *
     *    <p>This method *must* be called after every call of {@link #removeNeighbor}
     *    in {@link #processEvent}.
     *    </p>
     */
    private void processNeighborListSize(Node node, int pid) {
        if (nNodes==20) {
            Object ev;
            long latency;
            ev = new SimpleMsg(TRACKER, node);
            Node tracker = ((BitTorrent)node.getProtocol(pid)).tracker;
            if(tracker != null){
                latency = ((Transport)node.getProtocol(tid)).getLatency(node, tracker);
                EDSimulator.add(latency,ev,tracker,pid);
            }
        }
    }

    /**
     *    The standard method that processes incoming events.
     *    @param node reference to the local node for which the event is going to be processed
     *    @param pid BitTorrent's protocol id
     *    @param event the event to process
     */
    public void processEvent(Node node, int pid, Object event){  // core method: dispatches all of the protocol's messages and events (see the manual for their semantics)

        Object ev;
        long latency;
        switch(((SimpleEvent)event).getType()){

            case KEEP_ALIVE: // 1
            {
                Node sender = ((IntMsg)event).getSender();
                int isResponse = ((IntMsg)event).getInt();
                //System.out.println("process, keep_alive: sender is "+sender.getID()+", local is "+node.getID());
                Element e = search(sender.getID());
                if(e!= null){ //if I know the sender
                    cache[e.peer].isAlive();
                    if(isResponse==0 && alive(sender)){
                        Object msg = new IntMsg(KEEP_ALIVE,node,1);
                        latency = ((Transport)node.getProtocol(tid)).getLatency(node, sender);
                        EDSimulator.add(latency,msg,sender,pid);
                        cache[e.peer].justSent();
                    }
                }
                else{
                    System.err.println("despite it should never happen, it happened");
                    ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
                    latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
                    EDSimulator.add(latency,ev,sender,pid);
                    nBitfieldSent++;
                }

            };break;

            case CHOKE: // 2, CHOKE message.
            {
                Node sender = ((SimpleMsg)event).getSender();
                //System.out.println("process, choke: sender is "+sender.getID()+", local is "+node.getID());
                Element e = search(sender.getID());
                if(e!= null){ //if I know the sender
                    cache[e.peer].isAlive();
                    unchokedBy[e.peer]= false; // I'm choked by it
                }
                else{
                    System.err.println("despite it should never happen, it happened");
                    ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
                    latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
                    EDSimulator.add(latency,ev,sender,pid);
                    nBitfieldSent++;
                }
            };break;

            case UNCHOKE: // 3, UNCHOKE message.
            {
                Node sender = ((SimpleMsg)event).getSender();
                //System.out.println("process, unchoke: sender is "+sender.getID()+", local is "+node.getID());
                Element e = search(sender.getID());
                if(e != null){ // If I know the sender
                    int senderIndex = e.peer;
                    cache[senderIndex].isAlive();
                    /* I send to it some of the pending requests not yet satisfied. */
                    int t = numberOfDuplicatedRequests;
                    for(int i=4;i>=0 && t>0;i--){
                        if(pendingRequest[i]==-1)
                            break;
                        if(alive(cache[senderIndex].node) && swarm[senderIndex][decode(pendingRequest[i],0)]==1){ //If the sender has that piece
                            ev = new IntMsg(REQUEST, node,pendingRequest[i] );
                            latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
                            EDSimulator.add(latency,ev, sender,pid);
                            cache[senderIndex].justSent();
                        }
                        if(!alive(cache[senderIndex].node)){
                            System.out.println("unchoke1 rm neigh "+ cache[senderIndex].node.getID() );
                            removeNeighbor(cache[senderIndex].node);
                            processNeighborListSize(node,pid);
                            return;
                        }
                        t--;
                    }
                    // I request missing blocks to fill the queue
                    int block = getBlock();
                    int piece;
                    while(block != -2){ //while still available request to send
                        if(block < 0){ // No more block to request for the current piece
                            piece = getPiece();
                            if(piece == -1){ // no more piece to request
                                break;
                            }
                            for(int j=0; j<swarmSize; j++){// send the interested message to those
                                                    // nodes which have that piece
                                lastInterested = piece;
                                if(alive(cache[j].node) && swarm[j][piece]==1){
                                    ev = new IntMsg(INTERESTED, node, lastInterested);
                                    latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[j].node);
                                    EDSimulator.add(latency,ev,cache[j].node,pid);
                                    cache[j].justSent();
                                }
                                if(!alive(cache[j].node)){
                                    //System.out.println("unchoke2 rm neigh "+ cache[j].node.getID() );
                                    removeNeighbor(cache[j].node);
                                    processNeighborListSize(node,pid);
                                }
                            }
                            block = getBlock();
                        }
                        else{ // block value referred to a real block
                            if(alive(cache[senderIndex].node) && swarm[senderIndex][decode(block,0)]==1 && addRequest(block)){ // The sender has that block
                                ev = new IntMsg(REQUEST, node, block);
                                latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
                                EDSimulator.add(latency,ev,sender,pid);
                                cache[senderIndex].justSent();
                            }
                            else{
                                if(!alive(cache[senderIndex].node)){
                                    System.out.println("unchoke3 rm neigh "+ cache[senderIndex].node.getID() );
                                    removeNeighbor(cache[senderIndex].node);
                                    processNeighborListSize(node,pid);
                                }
                                return;
                            }
                            block = getBlock();
                        }
                    }
                    unchokedBy[senderIndex] = true; // I add the sender to the list
                }
                else // It should never happen.
                {
                    System.err.println("despite it should never happen, it happened");
                    for(int i=0; i<swarmSize; i++)
                        if(cache[i].node !=null)
                            System.err.println(cache[i].node.getID());
                    ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
                    latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
                    EDSimulator.add(latency,ev,sender,pid);
                    nBitfieldSent++;
                }
            };break;

            case INTERESTED: // 4, INTERESTED message.
            {
                numInterestedPeers++;
                Node sender = ((IntMsg)event).getSender();
                //System.out.println("process, interested: sender is "+sender.getID()+", local is "+node.getID());
                int value = ((IntMsg)event).getInt();
                Element e = search(sender.getID());
                if(e!=null){
                    cache[e.peer].isAlive();
                    cache[e.peer].interested = value;
                }
                else{
                    System.err.println("despite it should never happen, it happened");
                    ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
                    latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
                    EDSimulator.add(latency,ev,sender,pid);
                    nBitfieldSent++;
                }

            }; break;

            case NOT_INTERESTED: // 5, NOT_INTERESTED message.
            {
                numInterestedPeers--;
                Node sender = ((IntMsg)event).getSender();
                //System.out.println("process, not_interested: sender is "+sender.getID()+", local is "+node.getID());
                int value = ((IntMsg)event).getInt();
                Element e = search(sender.getID());
                if(e!=null){
                    cache[e.peer].isAlive();
                    if(cache[e.peer].interested == value)
                        cache[e.peer].interested = -1; // not interested
                }
            }; break;

            case HAVE: // 6, HAVE message.
            {
                Node sender = ((IntMsg)event).getSender();
                //System.out.println("process, have: sender is "+sender.getID()+", local is "+node.getID());
                int piece = ((IntMsg)event).getInt();
                Element e = search(sender.getID());
                if(e!=null){
                    cache[e.peer].isAlive();
                    swarm[e.peer][piece]=1;
                    rarestPieceSet[piece]++;
                    boolean isSeeder = true;
                    for(int i=0; i<nPieces; i++){
                        isSeeder = isSeeder && (swarm[e.peer][i]==1);
                    }
                    e.isSeeder = isSeeder;
                }
                else{
                    System.err.println("despite it should never happen, it happened");
                    ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
                    latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
                    EDSimulator.add(latency,ev,sender,pid);
                    nBitfieldSent++;
                }
            }; break;

            case BITFIELD: // 7, BITFIELD message
            {
                Node sender = ((BitfieldMsg)event).getSender();
                int []fileStatus = ((BitfieldMsg)event).getArray();
                /*Response with NACK*/
                if(!((BitfieldMsg)event).isRequest && !((BitfieldMsg)event).ack){
                    Element e = search(sender.getID());
                    if(e == null) // if is a response with nack that follows a request
                        nBitfieldSent--;
                    // otherwise is a response with ack that follows a duplicate
                    // insertion attempt
                    //System.out.println("process, bitfield_resp_nack: sender is "+sender.getID()+", local is "+node.getID());
                    return;
                }
                /*Request with NACK*/
                if(((BitfieldMsg)event).isRequest && !((BitfieldMsg)event).ack){
                    //System.out.println("process, bitfield_req_nack: sender is "+sender.getID()+", local is "+node.getID());
                    if(alive(sender)){
                        Element e = search(sender.getID());
                        ev = new BitfieldMsg(BITFIELD, false, true, node, status, nPieces); //response with ack
                        latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
                        EDSimulator.add(latency,ev,sender,pid);
                        cache[e.peer].justSent();
                    }
                }
                /*Response with ACK*/
                if(!((BitfieldMsg)event).isRequest && ((BitfieldMsg)event).ack){
                    nBitfieldSent--;
                    //System.out.println("process, bitfield_resp_ack: sender is "+sender.getID()+", local is "+node.getID());
                    if(alive(sender)){
                        if(addNeighbor(sender)){
                            Element e = search(sender.getID());
                            cache[e.peer].isAlive();
                            swarm[e.peer] = fileStatus;
                            boolean isSeeder = true;
                            for(int i=0; i<nPieces; i++){
                                rarestPieceSet[i]+= fileStatus[i];
                                isSeeder = isSeeder && (fileStatus[i]==1);
                            }
                            e.isSeeder = isSeeder;

                            if(nNodes==10 && !lock){ // I begin to request pieces
                                lock = true;
                                int piece = getPiece();
                                if(piece == -1)
                                    return;
                                lastInterested = piece;
                                currentPiece = lastInterested;
                                ev = new IntMsg(INTERESTED, node, lastInterested);
                                for(int i=0; i<swarmSize; i++){// send the interested message to those
                                                        // nodes which have that piece
                                    if(alive(cache[i].node) && swarm[i][piece]==1){
                                        latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
                                        EDSimulator.add(latency,ev,cache[i].node,pid);
                                        cache[i].justSent();
                                    }
                                }

                            }

                        }
                    }
                    else
                        System.out.println("Sender "+sender.getID()+" not alive");
                }
                /*Request with ACK*/
                if(((BitfieldMsg)event).isRequest && ((BitfieldMsg)event).ack){
                    //System.out.println("process, bitfield_req_ack: sender is "+sender.getID()+", local is "+node.getID());
                    if(alive(sender)){
                        if(addNeighbor(sender)){
                            Element e = search(sender.getID());
                            cache[e.peer].isAlive();
                            swarm[e.peer] = fileStatus;
                            boolean isSeeder = true;
                            for(int i=0; i<nPieces; i++){
                                rarestPieceSet[i]+= fileStatus[i]; // I update the rarestPieceSet with the pieces of the new node
                                isSeeder = isSeeder && (fileStatus[i]==1); // I check if the new node is a seeder
                            }
                            e.isSeeder = isSeeder;
                            ev = new BitfieldMsg(BITFIELD, false, true, node, status, nPieces); //response with ack
                            latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
                            EDSimulator.add(latency,ev,sender,pid);
                            cache[e.peer].justSent();
                            if(nNodes==10 && !lock){ // I begin to request pieces
                                int piece = getPiece();
                                if(piece == -1)
                                    return;
                                lastInterested = piece;
                                currentPiece = lastInterested;
                                ev = new IntMsg(INTERESTED, node, lastInterested);
                                for(int i=0; i<swarmSize; i++){// send the interested message to those
                                                        // nodes which have that piece
                                    if(alive(cache[i].node) && swarm[i][piece]==1){
                                        latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
                                        EDSimulator.add(latency,ev,cache[i].node,pid);
 763                                         cache[i].justSent();
 764                                     }
 765                                 }
 766                                 
 767                             }
 768                         }
 769                         else {
 770                             Element e;
 771                             if((e = search(sender.getID()))!=null){ // The sender was already in the cache
 772                                 cache[e.peer].isAlive();
 773                                 ev = new BitfieldMsg(BITFIELD, false, true, node, status, nPieces); //response with ack
 774                                 latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
 775                                 EDSimulator.add(latency,ev,sender,pid);
 776                                 cache[e.peer].justSent();
 777                             }
 778                             else{ // It was not possible to add the sender (nBitfieldSent+nNodes > swarmSize)
 779                                 ev = new BitfieldMsg(BITFIELD, false, false, node, status, nPieces); //response with nack
 780                                 latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
 781                                 EDSimulator.add(latency,ev,sender,pid);
 782                             }
 783                         }
 784                         
 785                     }
 786                     else
 787                         System.out.println("Sender "+sender.getID()+" not alive");
 788                 }
 789             };break;
 790                 
 791             case REQUEST: // 8, REQUEST message.
 792             {
 793                 Object evnt;
 794                 Node sender = ((IntMsg)event).getSender();
 795                 int value = ((IntMsg)event).getInt();
 796                 Element e;
 797                 BitTorrent senderP;
 798                 int remoteRate;
 799                 int localRate;
 800                 int bandwidth;
 801                 int downloadTime;
 802                 
 803                 e = search(sender.getID());
 804                 if (e==null)
 805                     return;
 806                 cache[e.peer].isAlive();
 807                 
 808                 requestToServe.enqueue(value, sender);
 809                                 
 810                 /*I serve the enqueued requests until there are 10 uploading pieces or the queue is empty*/
 811                 while(!requestToServe.empty() && nPiecesUp <10){ 
 812                     Request req = requestToServe.dequeue();
 813                     e = search(req.sender.getID());
 814                     if(e!=null && alive(req.sender)){
 815                         ev = new IntMsg(PIECE, node, req.id);
 816                         nPiecesUp++;
 817                         e.valueUP++;
 818                         senderP = ((BitTorrent)req.sender.getProtocol(pid));
 819                         senderP.nPiecesDown++;
 820                         remoteRate = senderP.maxBandwidth/(senderP.nPiecesUp + senderP.nPiecesDown);
 821                         localRate = maxBandwidth/(nPiecesUp + nPiecesDown);
 822                         bandwidth = Math.min(remoteRate, localRate);
 823                         downloadTime = ((16*8)/(bandwidth))*1000; // in milliseconds
 824                         latency = ((Transport)node.getProtocol(tid)).getLatency(node,req.sender);
 825                         EDSimulator.add(latency+downloadTime,ev,req.sender,pid);
 826                         cache[e.peer].justSent();
 827                         /*I send an event to myself to indicate that the download is completed.
 828                         This ensures that nPiecesUp is decremented even if the receiver
 829                         dies in the meantime.*/
 830                         evnt = new SimpleMsg(DOWNLOAD_COMPLETED, req.sender);
 831                         EDSimulator.add(latency+downloadTime,evnt,node,pid); 
 832                     }
 833                 }
 834             }; break;
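The transfer-time computation in the REQUEST handler above can be checked in isolation: a block is 16 KB, i.e. 16*8 = 128 Kbit; each side spreads its maxBandwidth (in Kbps) over its concurrent transfers, the smaller share is the bottleneck, and ((16*8)/bandwidth)*1000 gives the transfer time in milliseconds (integer division, exactly as in the source). A minimal standalone sketch (class and method names are mine, not part of the simulator):

```java
public class BlockTimeDemo {
    /** Transfer time in ms of one 16 KB block, mirroring the REQUEST handler's formula. */
    static int downloadTimeMs(int localMaxBw, int localTransfers,
                              int remoteMaxBw, int remoteTransfers) {
        int localRate = localMaxBw / localTransfers;     // Kbps share per local transfer
        int remoteRate = remoteMaxBw / remoteTransfers;  // Kbps share per remote transfer
        int bandwidth = Math.min(remoteRate, localRate); // the bottleneck side wins
        return ((16 * 8) / bandwidth) * 1000;            // 128 Kbit / Kbps -> ms (integer division)
    }

    public static void main(String[] args) {
        // 256 Kbps shared by 4 transfers vs 1024 Kbps by 2: bottleneck is 64 Kbps
        System.out.println(downloadTimeMs(256, 4, 1024, 2)); // prints 2000
    }
}
```

Note that because the division is integral, a bottleneck rate above 128 Kbps yields a transfer time of 0 ms.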
 835                 
 836             case PIECE: // 9, PIECE message.
 837             {
 838                 Node sender = ((IntMsg)event).getSender();
 839                 /*    Set the correct value for the local uploading and remote 
 840                 downloading number of pieces */
 841                 nPiecesDown--;
 842                 
 843                 if(peerStatus == 1)// To save CPU cycles
 844                     return;
 845                 //System.out.println("process, piece: sender is "+sender.getID()+", local is "+node.getID());
 846                 Element e = search(sender.getID());
 847                 
 848                 if(e==null){ //I can't accept a piece I was not waiting for
 849                     return;
 850                 }
 851                 e.valueDOWN++;
 852                 
 853                 cache[e.peer].isAlive();
 854                 
 855                 int value = ((IntMsg)event).getInt();
 856                 int piece = decode(value,0);
 857                 int block = decode(value,1);
 858                 /* If the block has not been already downloaded and it belongs to
 859                     the current downloading piece.*/
 860                 if(piece == currentPiece && decode(pieceStatus[block],0)!= piece){
 861                     pieceStatus[block] = value;
 862                     status[piece]++;
 863                     removeRequest(value);
 864                     
 865                     requestNextBlocks(node, pid, e.peer);
 866                     
 867                 }else{ // Either a future piece or an owned piece
 868                     if(piece!=currentPiece && status[piece]!=16){ // Piece not owned, will be considered later
 869                         incomingPieces.enqueue(value, sender);
 870                     }
 871                     
 872                 }
 873                 ev = new IntMsg(CANCEL, node, value);
 874                 /* I send a CANCEL to all nodes to which I previously requested the block*/
 875                 for(int i=0; i<swarmSize; i++){ 
 876                     if(alive(cache[i].node) && unchokedBy[i]==true && swarm[i][decode(block,0)]==1 && cache[i].node != sender){
 877                         latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
 878                         EDSimulator.add(latency,ev,cache[i].node,pid);
 879                         cache[i].justSent();
 880                     }
 881                 }
 882                 
 883                 if(status[currentPiece]==16){ // if piece completed, I change the currentPiece to the next wanted                    
 884                     nPieceCompleted++;
 885                     ev = new IntMsg(HAVE, node, currentPiece);
 886                     for(int i=0; i<swarmSize; i++){ // I send the HAVE for the piece
 887                         if(alive(cache[i].node)){
 888                             latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
 889                             EDSimulator.add(latency,ev,cache[i].node,pid);
 890                             cache[i].justSent();
 891                         }
 892                         if(!alive(cache[i].node)){
 893                             //System.out.println("piece3 rm neigh "+ cache[i].node.getID() );
 894                             
 895                             removeNeighbor(cache[i].node);
 896                             processNeighborListSize(node,pid);
 897                         }
 898                     }
 899                     ev = new IntMsg(NOT_INTERESTED, node, currentPiece);
 900                     for(int i=0; i<swarmSize; i++){ // I send the NOT_INTERESTED to the peers I sent an INTERESTED to
 901                         if(swarm[i][piece]==1 && alive(cache[i].node)){
 902                             latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
 903                             EDSimulator.add(latency,ev,cache[i].node,pid);
 904                             cache[i].justSent();
 905                         }
 906                         if(!alive(cache[i].node)){
 907                             //System.out.println("piece4 rm neigh "+ cache[i].node.getID() );
 908                             
 909                             removeNeighbor(cache[i].node);
 910                             processNeighborListSize(node,pid);
 911                         }
 912                     }
 913                     if(nPieceCompleted == nPieces){
 914                         System.out.println("FILE COMPLETED for peer "+node.getID());
 915                         this.peerStatus = 1;    
 916                     }
 917                     
 918                     /*    I set the currentPiece to the lastInterested. Then I extract 
 919                         the queued received blocks
 920                         */
 921                     
 922                     currentPiece = lastInterested;
 923                     int m = incomingPieces.dim;
 924                     while(m > 0){ // I process the queue
 925                         m--;
 926                         Request temp = incomingPieces.dequeue();
 927                         int p = decode(temp.id,0); // piece id
 928                         int b = decode(temp.id,1); // block id
 929                         Element s = search(temp.sender.getID());
 930                         if(s==null) // if the node that sent the block in the queue is dead
 931                             continue;
 932                         if(p==currentPiece && decode(pieceStatus[b],0)!= p){
 933                             pieceStatus[b] = temp.id;
 934                             status[p]++;
 935                             removeRequest(temp.id);
 936                             requestNextBlocks(node, pid, s.peer);
 937                         }
 938                         else{ // The piece not currently desired will be moved to the tail
 939                             if(p!= currentPiece) // If not a duplicate block but belongs to another piece
 940                                 incomingPieces.enqueue(temp.id,temp.sender);
 941                             else // duplicate block
 942                                 requestNextBlocks(node, pid, s.peer);
 943                         }
 944                     }
 945                 }
 946             }; break;
 947                 
 948             case CANCEL:
 949             {
 950                 Node sender = ((IntMsg)event).getSender();
 951                 int value = ((IntMsg)event).getInt();
 952                 requestToServe.remove(sender, value);
 953             };break;
 954                 
 955             case PEERSET: // PEERSET message
 956             {
 957                 Node sender = ((PeerSetMsg)event).getSender();
 958                 //System.out.println("process, peerset: sender is "+sender.getID()+", local is "+node.getID());
 959                 Neighbor n[] = ((PeerSetMsg)event).getPeerSet();
 960                 
 961                 for(int i=0; i<peersetSize; i++){
 962                     if( n[i]!=null && alive(n[i].node) && search(n[i].node.getID())==null && nNodes+nBitfieldSent <swarmSize-2) {
 963                         ev = new BitfieldMsg(BITFIELD, true, true, node, status, nPieces);
 964                         latency = ((Transport)node.getProtocol(tid)).getLatency(node,n[i].node);
 965                         EDSimulator.add(latency,ev,n[i].node,pid);
 966                         nBitfieldSent++;
 967                         // Here I should call the Neighbor.justSent(), but here
 968                         // the node is not yet in the cache.
 969                     }
 970                 }
 971             }; break;
 972                 
 973             case TRACKER: // TRACKER message
 974             {
 975                 
 976                 int j=0;
 977                 Node sender = ((SimpleMsg)event).getSender();
 978                 //System.out.println("process, tracker: sender is "+sender.getID()+", local is "+node.getID());
 979                 if(!alive(sender))
 980                     return;
 981                 Neighbor tmp[] = new Neighbor[peersetSize];
 982                 int k=0;
 983                 if(nNodes <= peersetSize){
 984                     for(int i=0; i< nMaxNodes+maxGrowth; i++){
 985                         if(cache[i].node != null && cache[i].node.getID()!= sender.getID()){
 986                             tmp[k]=cache[i];
 987                             k++;
 988                         }
 989                     }
 990                     ev = new PeerSetMsg(PEERSET, tmp, node);
 991                     latency = ((Transport)node.getProtocol(tid)).getLatency(node, sender);
 992                     EDSimulator.add(latency,ev,sender,pid);
 993                     return;
 994                 }
 995                 
 996                 while(j < peersetSize){
 997                     int i = CommonState.r.nextInt(nMaxNodes+maxGrowth);
 998                     for (int z=0; z<j; z++){
 999                         if(cache[i].node==null || tmp[z].node.getID() == cache[i].node.getID() || cache[i].node.getID() == sender.getID()){
1000                             z=0;
1001                             i= CommonState.r.nextInt(nMaxNodes+maxGrowth);
1002                         }
1003                     }
1004                     if(cache[i].node != null){
1005                         tmp[j] = cache[i];
1006                         j++;
1007                     }
1008                 }
1009                 ev = new PeerSetMsg(PEERSET, tmp, node);
1010                 latency = ((Transport)node.getProtocol(tid)).getLatency(node, sender);
1011                 EDSimulator.add(latency,ev,sender,pid);
1012             }; break;
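The TRACKER case above draws peersetSize distinct random entries from the cache, re-drawing whenever it hits an empty slot, a duplicate, or the requester itself. The same selection can be sketched more directly by filtering and shuffling (a simplified stand-in for illustration; class and names are mine, not the simulator's):

```java
import java.util.*;

public class PeerSetDemo {
    /** Picks up to k distinct peer IDs from the cache, excluding empty slots
        and the requesting peer itself. */
    static List<Long> pickPeers(Long[] cache, long requester, int k, Random r) {
        List<Long> candidates = new ArrayList<>();
        for (Long id : cache)
            if (id != null && id != requester)  // skip empty slots and the sender
                candidates.add(id);
        Collections.shuffle(candidates, r);     // uniform random order
        return candidates.subList(0, Math.min(k, candidates.size()));
    }

    public static void main(String[] args) {
        Long[] cache = {5L, null, 7L, 9L, 11L};
        System.out.println(pickPeers(cache, 9L, 3, new Random(42)));
    }
}
```

Shuffling once avoids the unbounded re-drawing loop of the original, at the cost of building a candidate list.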
1013                 
1014             case CHOKE_TIME: //Every 10 secs.
1015             {    
1016                 n_choke_time++;
1017                 
1018                 ev = new SimpleEvent(CHOKE_TIME);
1019                 EDSimulator.add(10000,ev,node,pid);
1020                 int j=0;
1021                 /*I copy the interested nodes in the byBandwidth array*/
1022                 for(int i=0;i< swarmSize && byPeer[i].peer != -1; i++){
1023                     if(cache[byPeer[i].peer].interested > 0){
1024                         byBandwidth[j]=byPeer[i]; //shallow copy
1025                         j++;
1026                     }
1027                 }
1028                 
1029                 /*This ensures that, if fewer nodes are interested during the next
1030                     20 secs than now, the surplus entries will not be sorted. */
1031                 for(;j<swarmSize;j++){
1032                     byBandwidth[j]=null;
1033                 }
1034                 sortByBandwidth();
1035                 int optimistic = 3;
1036                 int luckies[] = new int[3];
1037                 try{ // It takes the first three neighbors
1038                     luckies[0] = byBandwidth[0].peer;
1039                     optimistic--;
1040                     luckies[1] = byBandwidth[1].peer;
1041                     optimistic--;
1042                     luckies[2] = byBandwidth[2].peer;
1043                 }
1044                 catch(NullPointerException e){ // If there are not enough peers in byBandwidth, the others are chosen randomly
1045                     for(int z = optimistic; z>0;z--){
1046                         int lucky = CommonState.r.nextInt(nNodes);
1047                         while(cache[byPeer[lucky].peer].status ==1 && alive(cache[byPeer[lucky].peer].node) && 
1048                               cache[byPeer[lucky].peer].interested == 0)// until the lucky peer is already unchoked or not interested
1049                             lucky = CommonState.r.nextInt(nNodes);
1050                         luckies[3-z]= byPeer[lucky].peer;
1051                     }
1052                 }
1053                 for(int i=0; i<swarmSize; i++){ // I perform the chokes and the unchokes
1054                     if((i==luckies[0] || i==luckies[1] || i==luckies[2]) &&  alive(cache[i].node) && cache[i].status != 2){ //the unchokes
1055                         cache[i].status = 1;
1056                         ev = new SimpleMsg(UNCHOKE, node);
1057                         latency = ((Transport)node.getProtocol(tid)).getLatency(node, cache[i].node);
1058                         EDSimulator.add(latency,ev,cache[i].node,pid);
1059                         cache[i].justSent();
1060                         //System.out.println("average time, unchoked: "+cache[i].node.getID());
1061                     }
1062                     else{ // the chokes
1063                         if(alive(cache[i].node) && (cache[i].status == 1 || cache[i].status == 2)){
1064                             cache[i].status = 0;
1065                             ev = new SimpleMsg(CHOKE, node);
1066                             latency = ((Transport)node.getProtocol(tid)).getLatency(node, cache[i].node);
1067                             EDSimulator.add(latency,ev,cache[i].node,pid);
1068                             cache[i].justSent();
1069                         }
1070                     }
1071                 }
1072                 
1073                 if(n_choke_time%2==0){ //every 20 secs. Used in computing the average download rates
1074                     for(int i=0; i<nNodes; i++){
1075                         if(this.peerStatus == 0){ // I'm a leecher
1076                             byPeer[i].head20 = byPeer[i].valueDOWN;
1077                         }
1078                         else{
1079                             byPeer[i].head20 = byPeer[i].valueUP;
1080                         }
1081                     }
1082                 }
1083             }; break;
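The CHOKE_TIME case above keeps the three interested peers with the best observed rate unchoked and chokes everyone else. The selection step can be sketched in isolation (class and method names are mine; the real code sorts the byBandwidth array instead):

```java
import java.util.*;

public class ChokeDemo {
    /** Returns the indices of the (up to) three interested peers with the
        highest observed rate, mirroring the CHOKE_TIME selection. */
    static int[] topThree(int[] rates, boolean[] interested) {
        Integer[] idx = new Integer[rates.length];
        for (int i = 0; i < rates.length; i++) idx[i] = i;
        // interested peers first, ordered by descending rate
        Arrays.sort(idx, (a, b) -> {
            if (interested[a] != interested[b]) return interested[a] ? -1 : 1;
            return Integer.compare(rates[b], rates[a]);
        });
        int n = 0;
        while (n < 3 && n < idx.length && interested[idx[n]]) n++;
        int[] out = new int[n];
        for (int i = 0; i < n; i++) out[i] = idx[i];
        return out;
    }

    public static void main(String[] args) {
        int[] rates = {50, 90, 10, 70};
        boolean[] interested = {true, true, false, true};
        System.out.println(Arrays.toString(topThree(rates, interested))); // prints [1, 3, 0]
    }
}
```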
1084                 
1085             case OPTUNCHK_TIME:
1086             {
1087                 
1088                 //System.out.println("process, optunchk_time");
1089                 
1090                 ev = new SimpleEvent(OPTUNCHK_TIME);
1091                 EDSimulator.add(30000,ev,node,pid);
1092                 int lucky = CommonState.r.nextInt(nNodes);
1093                 while(cache[byPeer[lucky].peer].status ==1)// until the lucky peer is already unchoked
1094                     lucky = CommonState.r.nextInt(nNodes);
1095                 if(!alive(cache[byPeer[lucky].peer].node))
1096                     return;
1097                 cache[byPeer[lucky].peer].status = 1;
1098                 Object msg = new SimpleMsg(UNCHOKE,node);
1099                 latency = ((Transport)node.getProtocol(tid)).getLatency(node, cache[byPeer[lucky].peer].node);
1100                 EDSimulator.add(latency,msg,cache[byPeer[lucky].peer].node,pid);
1101                 cache[byPeer[lucky].peer].justSent();
1102             }; break;
1103                 
1104             case ANTISNUB_TIME:
1105             {
1106                 if(this.peerStatus == 1) // I'm a seeder, I don't update the event
1107                     return;
1108                 //System.out.println("process, antisnub_time");
1109                 for(int i=0; i<nNodes; i++){
1110                     if(byPeer[i].valueDOWN >0 && (byPeer[i].valueDOWN - byPeer[i].head60)==0){// No blocks downloaded in 1 min
1111                         cache[byPeer[i].peer].status = 2; // I'm snubbed by it
1112                     }
1113                     byPeer[i].head60 = byPeer[i].valueDOWN;
1114                 }
1115                 ev = new SimpleEvent(ANTISNUB_TIME);
1116                 EDSimulator.add(60000,ev,node,pid);
1117                 long time = CommonState.getTime();
1118             }; break;
1119                 
1120             case CHECKALIVE_TIME:
1121             {
1122                 
1123                 //System.out.println("process, checkalive_time");
1124                 
1125                 long now = CommonState.getTime();
1126                 for(int i=0; i<swarmSize; i++){
1127                     /*If at least 2 minutes (plus 1 sec of tolerance) have passed
1128                     since I last sent anything to it.*/
1129                     if(alive(cache[i].node) && (cache[i].lastSent < (now-121000))){
1130                         Object msg = new IntMsg(KEEP_ALIVE,node,0);
1131                         latency = ((Transport)node.getProtocol(tid)).getLatency(node, cache[i].node);
1132                         EDSimulator.add(latency,msg,cache[i].node,pid);
1133                         cache[i].justSent();
1134                     }
1135                     /*If at least 2 minutes (plus 1 sec of tolerance) have passed since I last
1136                     received anything from it, even though I sent a keepalive 2 minutes ago*/
1137                     else{
1138                         if(cache[i].lastSeen <(now-121000) && cache[i].node != null && cache[i].lastSent < (now-121000)){
1139                             System.out.println("process, checkalive_time, rm neigh " + cache[i].node.getID());
1140                             if(cache[i].node.getIndex() != -1){
1141                                 System.out.println("This should never happen: removing a node that has not actually died");
1142                             }
1143                             removeNeighbor(cache[i].node);
1144                             processNeighborListSize(node,pid);
1145                         }
1146                     }
1147                 }
1148                 ev = new SimpleEvent(CHECKALIVE_TIME);
1149                 EDSimulator.add(120000,ev,node,pid);
1150             }; break;
1151                 
1152             case TRACKERALIVE_TIME:
1153             {
1154                 //System.out.println("process, trackeralive_time");
1155                 if(alive(tracker)){
1156                     ev = new SimpleEvent(TRACKERALIVE_TIME);
1157                     EDSimulator.add(1800000,ev,node,pid);
1158                 }
1159                 else
1160                     tracker=null;
1161                 
1162             }; break;
1163                 
1164             case DOWNLOAD_COMPLETED:
1165             {
1166                 nPiecesUp--;
1167             }; break;
1168                 
1169         }
1170     }
1171     
1172     /**
1173      *    Given a piece index and a block index it encodes them in an unique integer value.
1174      *    @param piece the index of the piece to encode.
1175      *    @param block the index of the block to encode.
1176      *    @return the encoding of the piece and the block indexes.
1177      */
1178     private int encode(int piece, int block){  // Encode the piece and block IDs; e.g. for piece 1234, its block 2 is encoded as 123402
1179         return (piece*100)+block;
1180         
1181     }
1182     /** 
1183      *    Returns either the piece or the block that contained in the <tt>value</tt> depending
1184      *    on <tt>part</tt>: 0 means the piece value, 1 the block value.
1185      *    @param value the ID of the block to decode.
1186      *    @param part the information to extract from <tt>value</tt>. 0 means the piece index, 1 the block index.
1187      *    @return the piece or the block index depending about the value of <tt>part</tt>
1188      */
1189     private int decode(int value, int part){       // Decode: part 0 returns the piece ID, part 1 the block ID
1190         if (value==-1) // Not a true value to decode
1191             return -1;
1192         if(part == 0) // I'm interested to the piece
1193             return value/100;
1194         else // I'm interested to the block
1195             return value%100;
1196     }
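The encode/decode pair above packs a piece index and a block index into a single int as piece*100 + block, which is safe here because a piece has only 16 blocks (fewer than 100). A standalone round-trip demo (class name is mine):

```java
public class EncodeDemo {
    // Same scheme as BitTorrent.encode/decode: value = piece*100 + block.
    static int encode(int piece, int block) { return piece * 100 + block; }
    static int decodePiece(int value) { return value == -1 ? -1 : value / 100; }
    static int decodeBlock(int value) { return value == -1 ? -1 : value % 100; }

    public static void main(String[] args) {
        int v = encode(1234, 2);
        System.out.println(v + " -> piece " + decodePiece(v) + ", block " + decodeBlock(v));
        // prints: 123402 -> piece 1234, block 2
    }
}
```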
1197     
1198     /**
1199      *    Used by {@link NodeInitializer#choosePieces(int, BitTorrent) NodeInitializer} to set 
1200      *    the number of pieces completed from the beginning, in accordance with
1201      *    the distribution in the configuration file.
1202      *    @param number the number of pieces completed
1203      */
1204     public void setCompleted(int number){
1205         this.nPieceCompleted = number;
1206     }
1207     
1208     /**
1209      *    Sets the status (the set of blocks) of the file for the current node.
1210      *  Note that a piece is considered <i>completed</i> if the number
1211      *  of downloaded blocks is 16.
1212      *  @param index The index of the piece
1213      *  @param value Number of blocks downloaded for the piece index.
1214      */
1215     public void setStatus(int index, int value){
1216         status[index]=value;
1217     }
1218     
1219     /**
1220      *    Sets the status of the local node.
1221      *    @param status The status of the node: 1 means seeder, 0 leecher
1222      */
1223     public void setPeerStatus(int status){
1224         this.peerStatus = status;
1225     }
1226     
1227     /**
1228      *    Gets the status of the local node.
1229      *    @return The status of the local node: 1 means seeder, 0 leecher
1230      */
1231     public int getPeerStatus(){
1232         return peerStatus;
1233     }
1234     
1235     /**
1236      *  Gets the number of blocks for a given piece owned by the local node.
1237      *    @param index The index of the piece
1238      *    @return Number of blocks downloaded for the piece index
1239      */
1240     public int getStatus(int index){
1241         return status[index];    
1242     }
1243     
1244     /**
1245      *    Sets the maximum bandwidth for the local node.
1246      *    @param value The value of bandwidth in Kbps
1247      */
1248     public void setBandwidth(int value){
1249         maxBandwidth = value;
1250     }
1251     
1252     /**
1253      *    Checks if a node is still alive in the simulated network.
1254      *    @param node The node to check
1255      *    @return true if the node <tt>node</tt> is up, false otherwise
1256      *    @see peersim.core.GeneralNode#isUp
1257      */
1258     public boolean alive(Node node){
1259         if(node == null)
1260             return false;
1261         else
1262             return node.isUp();
1263     }
1264         
1265     /**    
1266      *    Adds a neighbor to the cache of the local node.
1267      *  The new neighbor is put in the first null position.
1268      *    @param neighbor The neighbor node to add
1269      *  @return <tt>false</tt> if the neighbor is already present in the cache (this can happen when the peer requests a
1270      *    new peer set from the tracker and this neighbor is still in it) or no place is available.
1271      *    Otherwise, returns true if the node is correctly added to the cache.
1272      */
1273     public boolean addNeighbor(Node neighbor){
1274         if(search(neighbor.getID()) !=null){// if already exists
1275         //    System.err.println("Node "+neighbor.getID() + " not added, already exist.");
1276             return false;
1277         }
1278         if(this.tracker == null){ // I'm in the tracker's BitTorrent protocol
1279             for(int i=0; i< nMaxNodes+maxGrowth; i++){
1280                 if(cache[i].node == null){
1281                     cache[i].node = neighbor;
1282                     cache[i].status = 0; //choked
1283                     cache[i].interested = -1; //not interested
1284                     this.nNodes++;
1285                     
1286                     //System.err.println("i: " + i +" nMaxNodes: " + nMaxNodes);
1287                     return true;
1288                 }
1289             }
1290         }
1291         else{
1292             if((nNodes+nBitfieldSent) < swarmSize){
1293                 //System.out.println("I'm the node " + this.thisNodeID + ", trying to add node "+neighbor.getID());
1294                 for(int i=0; i<swarmSize; i++){
1295                     if(cache[i].node == null){
1296                         cache[i].node = neighbor;
1297                         cache[i].status = 0; //choked
1298                         cache[i].interested = -1; // not interested
1299                         byPeer[nNodes].peer = i;
1300                         byPeer[nNodes].ID = neighbor.getID();
1301                         sortByPeer();
1302                         this.nNodes++;
1303                         //System.out.println(neighbor.getID()+" added!");
1304                         return true;
1305                     }
1306                 }
1307                 System.out.println("Node not added, no places available");
1308             }
1309         }
1310         return false;
1311     }
1312     
1313     /**
1314      *    Removes a neighbor from the cache of the local node.
1315      *    @param neighbor The node to remove
1316      *    @return true if the node is correctly removed, false otherwise.
1317      */
1318     public boolean removeNeighbor(Node neighbor) {
1319         
1320         if (neighbor == null)
1321             return true;
1322         
1323         // this is the tracker's bittorrent protocol
1324         if (this.tracker == null) {
1325             for (int i=0; i< (nMaxNodes+maxGrowth); i++) {
1326                 
1327                 // check the feasibility of the removal
1328                 if ( (cache[i] != null) && (cache[i].node != null) &&
1329                      (cache[i].node.getID() == neighbor.getID()) ) {
1330                     cache[i].node = null;
1331                     this.nNodes--;
1332                     return true;
1333                 }
1334             }
1335             return false;
1336         }
1337         // this is the bittorrent protocol of a peer
1338         else {
1339             
1340             Element e = search(neighbor.getID());
1341             
1342             if (e != null) {
1343                 for (int i=0; i<nPieces; i++) {
1344                     rarestPieceSet[i] -= swarm[e.peer][i];
1345                     swarm[e.peer][i] = 0;
1346                 }
1347                 
1348                 cache[e.peer].node = null;
1349                 cache[e.peer].status = 0;
1350                 cache[e.peer].interested = -1;
1351                 unchokedBy[e.peer] = false;
1352                 this.nNodes--;
1353                 e.peer = -1;
1354                 e.ID = Integer.MAX_VALUE;
1355                 e.valueUP = 0;
1356                 e.valueDOWN = 0;
1357                 e.head20 = 0;
1358                 e.head60 = 0;
1359                 sortByPeer();
1360                 
1361                 return true;
1362             }
1363         }
1364         return false;
1365     }
1366     
1367     /**
1368      *    Adds a request to the pendingRequest queue.
1369      *    @param block The requested block
1370      *    @return true if the request has been successfully added to the queue, false otherwise
1371      */
1372     private boolean addRequest(int block){
1373         int i=4;
1374         while(i>=0 && pendingRequest[i]!=-1){
1375             i--;
1376         }
1377         if(i>=0){
1378             pendingRequest[i] = block;
1379             return true;
1380         }
1381         else { // It should never happen
1382                //System.err.println("pendingRequest queue full");
1383             return false;        
1384         }
1385     }
1386     
1387     /**
1388      *    Removes the block with the given <tt>id</tt> from the {@link #pendingRequest} queue
1389      *  and sorts the queue leaving the empty cell at the left.
1390      *    @param id the id of the requested block
1391      */
1392     private void removeRequest(int id){
1393         int i = 4;
1394         for(; i>=0; i--){
1395             if(pendingRequest[i]==id)
1396                 break;
1397         }
1398         for(; i>=0; i--){
1399             if(i==0)
1400                 pendingRequest[i] = -1;
1401             else
1402                 pendingRequest[i] = pendingRequest[i-1];
1403         }
1404     }
1405     
1406     /**
1407      *    Requests new blocks from the sender of the just received piece until the {@link #pendingRequest} queue is full.
1408      *    It calls {@link #getNewBlock(Node, int)} to implement the <i>strict priority</i> strategy. 
1409      *    @param node the local node
1410      *    @param pid the BitTorrent protocol id
1411      *    @param sender the sender of the just received piece. 
1412      */
1413     private void requestNextBlocks(Node node, int pid, int sender){
1414         int block = getNewBlock(node, pid);
1415         while(block != -2){
1416             if(unchokedBy[sender]==true && alive(cache[sender].node) && addRequest(block)){
1417                 Object ev = new IntMsg(REQUEST, node, block);
1418                 long latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[sender].node);
1419                 EDSimulator.add(latency,ev,cache[sender].node,pid);
1420                 cache[sender].justSent();
1421             }
1422             else{ // I cannot send request
1423                 if(!alive(cache[sender].node) && cache[sender].node!=null){
1424                     System.out.println("piece2 rm neigh "+ cache[sender].node.getID() );
1425                     removeNeighbor(cache[sender].node);
1426                     processNeighborListSize(node,pid);
1427                 }
1428                 return;
1429             }
1430             block = getNewBlock(node, pid);
1431         }
1432     }
1433     
1434     /**
1435      *    It returns the id of the next block to request. Sends <tt>INTERESTED</tt> if the new
1436      *    block belongs to a new piece.
1437      *    It uses {@link #getBlock()} to get the next block of a piece and calls {@link #getPiece()}
1438      *    when all the blocks for the {@link #currentPiece} have been requested.
1439      *    @param node the local node
1440      *    @param pid the BitTorrent protocol id
1441      *    @return -2 if no more places available in the <tt>pendingRequest</tt> queue;<br/>
1442      *            the value of the next block to request otherwise</p>
1443      */
1444     private int getNewBlock(Node node, int pid){ // returns the next block to request; handles the full-queue case and, when the current piece is exhausted, selects a new piece
1445         int block = getBlock();
1446         if(block < 0){ // No more block to request for the current piece 
1447             
1448             if(block ==-2) // Pending request queue full
1449                 return -2;
1450             
1451             int newPiece = getPiece();
1452             if(newPiece == -1){ // no more piece to request
1453                 return -2;
1454             }
1455             
1456             lastInterested = newPiece;
1457             Object ev = new IntMsg(INTERESTED, node, lastInterested);
1458             
1459             for(int j=0; j<swarmSize; j++){// send the interested message to those  
1460                                     // nodes which have that piece
1461                 if(alive(cache[j].node) && swarm[j][newPiece]==1){
1462                     long latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[j].node);
1463                     EDSimulator.add(latency,ev,cache[j].node,pid);
1464                     cache[j].justSent();
1465                 }
1466                 if(!alive(cache[j].node)){
1467                     //System.out.println("piece1 rm neigh "+ cache[j].node.getID() );
1468                     
1469                     removeNeighbor(cache[j].node);
1470                     processNeighborListSize(node,pid);
1471                 }
1472             }
1473             block = getBlock();
1474             return block;
1475         }
1476         else{
1477             // block value referred to a real block
1478             return block;
1479         }
1480     }
1481     
1482     /**
1483      *    Returns the next block to request for the {@link #currentPiece}.
1484      *    @return an index of a block of the <tt>currentPiece</tt> if there are still 
1485      *            available places in the {@link #pendingRequest} queue;<br/>
1486      *            -2 if the <tt>pendingRequest</tt> queue is full;<br/>
1487      *            -1 if no more blocks to request for the current piece. 
1488      */
1489     private int getBlock(){  // returns the block to request: -2 means the pending queue is full, -1 means the current piece has no more blocks to request
1490         int i=4;
1491         while(i>=0 && pendingRequest[i]!=-1){ // i is the first empty position from the head
1492             i--;
1493         }
1494         if(i==-1){// No places in the pendingRequest available
1495                   //System.out.println("Pending request queue full!");
1496             return -2;
1497         }
1498         int j;
1499         //The queue is not empty & last requested block belongs to lastInterested piece 
1500         if(i!=4 && decode(pendingRequest[i+1],0)==lastInterested)
1501             j=decode(pendingRequest[i+1],1)+1; // the block following the last requested
1502         else // I don't know which is the next block, so I search it.
1503             j=0; 
1504         /*    I search another block until the current has been already received. 
1505             *    If in pieceStatus at position j there is a block that belongs to
1506             *    lastInterested piece, means that the block j has been already
1507             *    received, otherwise I can request it.
1508             */
1509         while(j<16 && decode(pieceStatus[j],0)==lastInterested){
1510             j++;
1511         }
1512         if(j==16) // No more block to request for lastInterested piece
1513             return -1;
1514         return encode(lastInterested,j);
1515     }
1516     
1517     /**
1518      *    Returns the next correct piece to download. It choose the piece by using the
1519      *    <i>random first</i> and <i>rarest first</i> policy. For the beginning 4 pieces
1520      *    of a file the first one is used then the pieces are chosen using <i>rarest first</i>.
1521      *    @see "Documentation about the BitTorrent module"
1522      *    @return the next piece to download. If the whole file has been requested
1523      *    -1 is returned.
1524      */
1525     private int getPiece(){            // returns the next piece to download (random-first or rarest-first policy)
1526         int piece = -1;
1527         if(nPieceCompleted < 4){ //Uses random first piece: at the start, pick at random so a complete piece is obtained quickly
1528             piece = CommonState.r.nextInt(nPieces);
1529             while(status[piece]==16 || piece == currentPiece) // until the piece is owned
1530                 piece = CommonState.r.nextInt(nPieces);
1531             return piece;
1532         }
1533         else{ //Uses rarest piece first: once a few complete pieces are owned, switch to rarest-first
1534             int j=0;
1535             for(; j<nPieces; j++){ // I find the first not owned piece
1536                 if(status[j]==0){
1537                     piece = j;
1538                 if(piece != lastInterested) // theoretically it works because
1539                                                 // there should be only one interested 
1540                                                 // piece not yet downloaded
1541                         break;
1542                 }
1543             }
1544             if(piece==-1){ // Never entered the previous 'if' statement; an interested
1545                            // message has been sent for all pieces
1546                 return -1;
1547             }
1548             
1549             int rarestPieces[] = new int[nPieces-j]; // the pieces with the less number of occurences
1550             rarestPieces[0] = j;
1551             int nValues = 1; // number of pieces less distributed in the network
1552             for(int i=j+1; i<nPieces; i++){ // Finds the rarest piece not owned
1553                 if(rarestPieceSet[i]< rarestPieceSet[rarestPieces[0]] && status[i]==0){ // if strictly less than the current one
1554                     rarestPieces[0] = i; 
1555                     nValues = 1;
1556                 }
1557                 if(rarestPieceSet[i]==rarestPieceSet[rarestPieces[0]] && status[i]==0){ // if equal
1558                     rarestPieces[nValues] = i;
1559                     nValues++;
1560                 }
1561             }
1562             
1563             piece = CommonState.r.nextInt(nValues); // one of the less owned pieces
1564             return rarestPieces[piece];
1565         }
1566     }
1567     
1568     /**
1569      *    Returns the file's size as number of pieces of 256KB.
1570      *    @return number of pieces that compose the file.
1571      */
1572     public int getNPieces(){     // returns the number of pieces that make up the file
1573         return nPieces;    
1574     }
1575     /**
1576      *    Clone method of the class. Returns a deep copy of the BitTorrent class. Used
1577      *    by the simulation to initialize the {@link peersim.core.Network}
1578      *    @return the deep copy of the BitTorrent class.
1579      */
1580     public Object clone(){
1581         Object prot = null;
1582         try{
1583             prot = (BitTorrent)super.clone();
1584         }
1585         catch(CloneNotSupportedException e){};
1586         
1587         ((BitTorrent)prot).cache = new Neighbor[swarmSize];
1588         for(int i=0; i<swarmSize; i++){
1589             ((BitTorrent)prot).cache[i] = new Neighbor();
1590         }
1591         
1592         ((BitTorrent)prot).byPeer = new Element[swarmSize];
1593         for(int i=0; i<swarmSize; i++){
1594             ((BitTorrent)prot).byPeer[i] = new Element();
1595         }
1596         
1597         ((BitTorrent)prot).unchokedBy = new boolean[swarmSize];
1598         
1599         ((BitTorrent)prot).byBandwidth = new Element[swarmSize];
1600         ((BitTorrent)prot).status = new int[nPieces];
1601         ((BitTorrent)prot).pieceStatus = new int[16];
1602         for(int i=0; i<16;i++)
1603             ((BitTorrent)prot).pieceStatus[i] = -1;
1604         ((BitTorrent)prot).pendingRequest = new int[5];
1605         for(int i=0; i<5;i++)
1606             ((BitTorrent)prot).pendingRequest[i] = -1;
1607         ((BitTorrent)prot).rarestPieceSet = new int[nPieces];
1608         for(int i=0; i<nPieces;i++)
1609             ((BitTorrent)prot).rarestPieceSet[i] = 0;
1610         ((BitTorrent)prot).swarm = new int[swarmSize][nPieces];
1611         ((BitTorrent)prot).requestToServe = new Queue(20);
1612         ((BitTorrent)prot).incomingPieces = new Queue(100);
1613         return prot;
1614     }
1615     
1616     /**
1617      *    Sorts {@link #byPeer} array by peer's ID. It implements the <i>InsertionSort</i>
1618      *    algorithm. 
1619      */
1620     public void sortByPeer(){    // sorts byPeer by node ID (insertion sort)
1621         int i;
1622         
1623         for(int j=1; j<swarmSize; j++)   // out is dividing line
1624         {
1625             Element key = new Element(); 
1626             byPeer[j].copyTo(key) ;    // remove marked item
1627             i = j-1;           // start shifts at out
1628             while(i>=0 && (byPeer[i].ID > key.ID)) // until one is smaller,
1629             {
1630                 byPeer[i].copyTo(byPeer[i+1]);      // shift item right,
1631                 i--;            // go left one position
1632             }
1633             key.copyTo(byPeer[i+1]);         // insert marked item
1634         } 
1635         
1636     }
1637     
1638     /**
1639      *    Sorts the array {@link #byBandwidth} using <i>QuickSort</i> algorithm.
1640      *    <tt>null</tt> elements and seeders are moved to the end of the array.
1641      */
1642     public void sortByBandwidth() {       // sorts nodes by transfer bandwidth (quicksort)
1643         quicksort(0, swarmSize-1);
1644     }
1645     
1646     /**
1647      *    Used by {@link #sortByBandwidth()}. It's the implementation of the
1648      *    <i>QuickSort</i> algorithm.
1649      *    @param left the leftmost index of the array to sort.
1650      *    @param right the rightmost index of the array to sort.
1651      */
1652     private void quicksort(int left, int right) {
1653         if (right <= left) return;
1654         int i = partition(left, right);
1655         quicksort(left, i-1);
1656         quicksort(i+1, right);
1657     }
1658     
1659     /**
1660      *    Used by {@link #quicksort(int, int)}, partitions the subarray to sort returning
1661      *    the splitting point as stated by the <i>QuickSort</i> algorithm.
1662      *    @see "The <i>QuickSort</i> algorithm".
1663      */
1664     private int partition(int left, int right) {        // partition step of the quicksort
1665         int i = left - 1;
1666         int j = right;
1667         while (true) {
1668             while (greater(byBandwidth[++i], byBandwidth[right]))      // find item on left to swap
1669                 ;                               // a[right] acts as sentinel
1670             while (greater(byBandwidth[right], byBandwidth[--j])) {      // find item on right to swap
1671                 if (j == left) break;  // don't go out-of-bounds
1672             }
1673             if (i >= j) break;                  // check if pointers cross
1674             swap(i, j);                      // swap two elements into place
1675         }
1676         swap(i, right);                      // swap with partition element
1677         return i;
1678     }
1679     
1680     /**
1681      *    Answers the question "is x > y?". Compares the {@link Element}s given as 
1682      *    parameters. <tt>Element x</tt> is greater than <tt>y</tt> if it isn't <tt>null</tt>
1683      *    and in the last 20 seconds the local node has downloaded ("uploaded" if the local node is a 
1684      *    seeder) more blocks from <tt>x</tt> than from <tt>y</tt>.
1685      *    @param x the first <tt>Element</tt> to compare.
1686      *    @param y the second <tt>Element</tt> to compare
1687      *    @return <tt>true</tt> if x > y;<br/>
1688      *            <tt>false</tt> otherwise.
1689      */
1690     private boolean greater(Element x, Element y) {     // comparator used by the sorting routines
1691         /*
1692          * Null elements and seeders are shifted at the end
1693          * of the array
1694          */
1695         if (x==null) return false;
1696         if (y==null) return true;
1697         if (x.isSeeder) return false;
1698         if (y.isSeeder) return true;
1699         
1700         // if the local node is a leecher
1701         if (peerStatus==0) {
1702             if ((x.valueDOWN - x.head20) >
1703                 (y.valueDOWN -y.head20))
1704                 return true;
1705             else return false;
1706         }
1707         
1708         // if peerStatus==1 (the local node is a seeder)
1709         else {            
1710             if ((x.valueUP - x.head20) >
1711                 (y.valueUP -y.head20))
1712                 return true;
1713             else return false;
1714         }
1715     }
1716     
1717     /**
1718      *    Swaps {@link Element} <tt>i</tt> with <tt>j</tt> in the {@link #byBandwidth}.<br/>
1719      *    Used by {@link #partition(int, int)}
1720      *    @param i index of the first element to swap
1721      *    @param j index of the second element to swap
1722      */
1723     private void swap(int i, int j) {      // swaps two elements of byBandwidth; used by partition
1724         Element swap = byBandwidth[i];
1725         byBandwidth[i] = byBandwidth[j];
1726         byBandwidth[j] = swap;
1727     }
1728     
1729     /**    Searches the node with the given ID. It does a binary 
1730      *    search.
1731      *    @param ID ID of the node to search.
1732      *    @return the {@link Element} in {@link #byPeer} which represents the node with the
1733      *    given ID.
1734      */
1735     public Element search(long ID){               // binary search for the node with the given ID
1736         int low = 0;
1737         int high = swarmSize-1;
1738         int p = low+((high-low)/2);              //Initial probe position
1739         while ( low <= high) {
1740             if ( byPeer[p] == null || byPeer[p].ID > ID)
1741                 high = p - 1;
1742             else { 
1743                 if( byPeer[p].ID < ID )  //Wasteful second comparison forced by syntax limitations.
1744                     low = p + 1;
1745                 else
1746                     return byPeer[p];
1747             }
1748             p = low+((high-low)/2);         //Next probe position.
1749         }
1750         return null;     
1751     }
1752 }
1753 
1754 /**
1755  *    This class is used to store the main information about a neighbor regarding
1756  *    the calculation of the Downloading/Uploading rates. It is the class of items in 
1757  *    {@link example.bittorrent.BitTorrent#byPeer} and {@link example.bittorrent.BitTorrent#byBandwidth}.
1758  */
1759 class Element{          // stores a neighbor's upload/download rate statistics
1760     /**
1761      *    ID of the represented node.
1762      */
1763     public long ID = Integer.MAX_VALUE;        // key used to look up the node
1764     /**
1765      *    Index position of the node in the {@link example.bittorrent.BitTorrent#cache} array.
1766      */
1767     public int peer = -1; 
1768     /**
1769      *    Number of blocks uploaded to anyone since the beginning.
1770      */
1771     public int valueUP = 0;           // blocks uploaded since the beginning
1772     /**
1773      *    Number of blocks downloaded from anyone since the beginning.
1774      */
1775     public int valueDOWN = 0;         // blocks downloaded since the beginning
1776     /**
1777      *    Value of either {@link #valueUP} or {@link #valueDOWN} (depending by 
1778      *    {@link example.bittorrent.BitTorrent#peerStatus}) 20 seconds before.
1779      */
1780     public int head20 = 0;           // value of the UP/DOWN counter 20 seconds ago (head60 below: 60 seconds ago)
1781     /**
1782      *    Value of either {@link #valueUP} or {@link #valueDOWN} (depending by 
1783      *    {@link example.bittorrent.BitTorrent#peerStatus}) 60 seconds before.
1784      */
1785     public int head60 = 0; 
1786     /**
1787      *    <tt>true</tt> if the node is a seeder, <tt>false</tt> otherwise.
1788      */
1789     public boolean isSeeder = false;
1790     /**
1791      *    Makes a deep copy of the Element to <tt>destination</tt>
1792      *    @param destination Element instance where to make the copy
1793      */
1794     public void copyTo(Element destination){
1795         destination.ID = this.ID;
1796         destination.peer = this.peer;
1797         destination.valueUP = this.valueUP;
1798         destination.valueDOWN = this.valueDOWN;
1799         destination.head20 = this.head20;
1800         destination.head60 = this.head60;
1801     }
1802 }
1803 
1804 /**
1805  *    This class stores information about the neighbors regarding their status. It is 
1806  *    the type of the items in the {@link example.bittorrent.BitTorrent#cache}.
1807  */
1808 class Neighbor{     // represents a neighbor node and its state
1809     /**
1810      *    Reference to the node in the {@link peersim.core.Network}.
1811      */
1812     public Node node = null;
1813     /**
1814      *    -1 means not interested<br/>
1815      *    Other values means the last piece number for which the node is interested.
1816      */
1817     public int interested;
1818     /**
1819      *    0 means CHOKED<br/>
1820      *    1 means UNCHOKED<br/>
1821      *    2 means SNUBBED_BY. If this value is set and the node is to be unchocked,
1822      *    value 2 has the priority.
1823      */
1824     public int status;    // records the choked, unchoked, or snubbed state
1825     /**
1826      *    Last time a message from the node represented has been received.
1827      */
1828     public long lastSeen = 0; 
1829     /**
1830      *    Last time a message to the node represented has been sent.
1831      */
1832     public long lastSent = 0;
1833     
1834     /**
1835      * Sets the last time the neighbor was seen.
1836      */
1837     public void isAlive(){                // records the time a message was last received from this node
1838         long now = CommonState.getTime();
1839         this.lastSeen = now;
1840     }
1841     
1842     /*
1843      * Sets the last time the local peer sent something to the neighbor.
1844      */
1845     public void justSent(){             // records the time a message was last sent to this node
1846         long now = CommonState.getTime();
1847         this.lastSent = now;
1848     }
1849     
1850 }
1851 
1852 /**
1853  *    Class type of the queues's items in {@link example.bittorrent.BitTorrent#incomingPieces} 
1854  *    and {@link example.bittorrent.BitTorrent#requestToServe}.
1855  */
1856 class Queue{             // request queue: enqueue/dequeue, emptiness test, search, and removal
1857     int maxSize;
1858     int head = 0;
1859     int tail = 0;
1860     int dim = 0;
1861     Request queue[];
1862     
1863     /**
1864      *    Public constructor. Creates a queue of size <tt>size</tt>.
1865      */
1866     public Queue(int size){
1867         maxSize = size;
1868         queue = new Request[size];
1869         for(int i=0; i< size; i++)
1870             queue[i]= new Request();
1871     }
1872     
1873     /**
1874      *    Enqueues the request of the block <tt>id</tt> and its <tt>sender</tt>
1875      *    @param id the id of the block in the request
1876      *    @param sender a reference to the sender of the request
1877      *    @return <tt>true</tt> if the request has been correctly added, <tt>false</tt>
1878      *    otherwise.
1879      */
1880     public boolean enqueue(int id, Node sender){
1881         if(dim < maxSize){
1882             queue[tail%maxSize].id = id;
1883             queue[tail%maxSize].sender = sender;
1884             tail++;
1885             dim++;
1886             return true;
1887         }
1888         else return false;
1889     }
1890     
1891     /**
1892      *    Returns the {@link Request} in the head of the queue.
1893      *    @return the element in the head.<br/>
1894      *            <tt>null</tt> if the queue is empty.
1895      */
1896     public Request dequeue(){
1897         Request value;
1898         if(dim > 0){
1899             value = queue[head%maxSize];
1900             head++;
1901             dim--;
1902             return value;
1903         }
1904         else return null; //empty queue
1905     }
1906     
1907     /**
1908      *    Returns the status of the queue.
1909      *    @return <tt>true</tt> if the queue is empty, <tt>false</tt>
1910      *    otherwise.
1911      */
1912     public boolean empty(){
1913         return (dim == 0);
1914     }
1915     
1916     /**
1917      *    Returns <tt>true</tt> if block given as parameter is in.
1918      *    @param    value the id of the block to search.
1919      *    @return <tt>true</tt> if the block <tt>value</tt> is in the queue, <tt>false</tt>
1920      *    otherwise.
1921      */
1922     public boolean contains(int value){
1923         if(empty())
1924             return false;
1925         for(int i=head; i<head+dim; i++){
1926             if(queue[i%maxSize].id == value)
1927                 return true;
1928         }
1929         return false;
1930     }
1931     
1932     /**
1933      *    Removes a request from the queue.
1934      *    @param sender the sender of the request.
1935      *    @param value the id of the block requested.
1936      *    @return <tt>true</tt> if the request has been correctly removed, <tt>false</tt>
1937      *    otherwise.
1938      */
1939     public boolean remove(Node sender, int value){
1940         if(empty())
1941             return false;
1942         for(int i=head; i<head+dim; i++){
1943             if(queue[i%maxSize].id == value && queue[i%maxSize].sender == sender){
1944                 for(int j=i; j>head; j--){ // Shifts the elements for the removal
1945                     queue[j%maxSize]= queue[(j-1)%maxSize];
1946                 }
1947                 head++;
1948                 dim--;
1949                 return true;
1950             }
1951         }
1952         return false;
1953     }
1954 }
1955 
1956 /**
1957  *    This class represent an enqueued request of a block.
1958  */
1959 class Request{             // a block request: the block ID and the request's sender
1960         /**
1961      *    The id of the block.
1962      */
1963     public int id;
1964     /**
1965      *    The sender of the request.
1966      */
1967     public Node sender;
1968 }
bittorrent.java

BitTorrent involves a number of small algorithms, such as piece selection and deciding which peers to choke or unchoke. They are summarized below.

1. Piece Selection
Choosing a good order in which to download pieces is very important for performance. A poor piece-selection algorithm can end up with every piece only partially downloaded, or, at the other extreme, with no complete piece available to upload to other peers.
1) Strict Priority
The first piece-selection policy is: once a sub-piece of some piece has been requested, the remaining sub-pieces of that piece are requested before those of any other piece, so that a complete piece is obtained as quickly as possible.
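A minimal sketch of the strict-priority rule just described (the names here are illustrative, not the simulator's API): as long as the piece in progress has a missing block, that block is requested before any block of another piece.

```java
public class StrictPriority {
    // received[i] == true means block i of the current piece has already arrived.
    // Returns the first missing block of that piece, or -1 once the piece is
    // complete; only then would the caller go on to choose a new piece.
    public static int nextBlock(boolean[] received) {
        for (int i = 0; i < received.length; i++) {
            if (!received[i]) return i;
        }
        return -1;
    }
}
```

In the listing above, getBlock() plays this role: it keeps returning blocks of lastInterested until the piece is exhausted, and only then does getNewBlock() call getPiece().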
2) Rarest First
When choosing the next piece to download, a downloader normally picks the piece held by the fewest of its peers, the so-called "rarest first" policy. This technique ensures that every downloader holds the pieces its peers most want, so that uploading can begin as soon as it is needed. It also ensures that the most common pieces are left until last, which reduces the chance that a peer currently providing uploads will later have nothing left that anyone is interested in. In other words, each peer preferentially downloads the pieces that are rarest in the whole system and postpones those that are already widespread, which drives the system toward a better overall state. Without this policy, if everyone downloaded the most common pieces, those pieces would become ever more replicated while the rare ones stayed rare; eventually some peers would hold nothing their peers wanted, participation would shrink, and overall performance would degrade.
The BitTorrent design draws heavily on ideas from economics, optimizing for the system as a whole: the more participants, the better the system performs.
Information theory shows that no downloader can finish the file until the seed has uploaded every piece of it. In a deployment with a single seed whose upload capacity is lower than that of most of its downloaders, performance is better when different downloaders fetch different pieces from the seed, because duplicate downloads waste the seed's opportunity to inject new data. "Rarest first" makes downloaders fetch from the seed only pieces that are new to the system (pieces no other peer has yet), because a downloader can see that the pieces the seed has already uploaded are available from its other peers.
In some deployments the original seed eventually shuts down for some reason, leaving the remaining downloaders responsible for all uploading. This clearly carries a risk: some piece may end up held by no downloader at all. "Rarest first" handles this case well too: by replicating the rarest pieces as quickly as possible, it reduces the risk of a piece being lost when the current peers stop uploading.
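The policy above can be sketched as follows (a simplified stand-in for the rarest-first branch of getPiece() in the listing; the names are illustrative): count how many neighbors hold each missing piece and collect the least-replicated candidates, one of which would then be chosen at random.

```java
import java.util.ArrayList;
import java.util.List;

public class RarestFirst {
    // availability[i] = number of neighbors holding piece i (like rarestPieceSet);
    // owned[i] = whether the local node already has piece i.
    // Returns every missing piece tied for the lowest availability.
    public static List<Integer> rarestCandidates(int[] availability, boolean[] owned) {
        List<Integer> best = new ArrayList<>();
        int min = Integer.MAX_VALUE;
        for (int i = 0; i < availability.length; i++) {
            if (owned[i]) continue;             // skip pieces we already have
            if (availability[i] < min) {        // strictly rarer: restart the tie set
                min = availability[i];
                best.clear();
                best.add(i);
            } else if (availability[i] == min) { // equally rare: extend the tie set
                best.add(i);
            }
        }
        return best;
    }
}
```

getPiece() does the same thing with the rarestPieces array and then draws one candidate with CommonState.r.nextInt(nValues).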
3) Random First Piece
An exception to "rarest first" is made at the very start of a download. At that point the downloader has no piece to upload, so it needs a complete piece as quickly as possible. The rarest pieces are typically held by only a single peer, so they download more slowly than pieces replicated across several peers. The first piece is therefore chosen at random, and only once it completes does the downloader switch to the "rarest first" strategy.
4) Endgame Mode
Sometimes a piece ends up being requested from a very slow peer. In the middle of a download this is not a problem, but it can delay the finish. To prevent this, in the final phase a peer sends requests for the remaining sub-pieces to all of its peers; as each sub-piece arrives, it sends cancel messages for it to the other peers so that bandwidth is not wasted. In practice little bandwidth is lost this way, and the end of the file always downloads very quickly.
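A sketch of the bookkeeping endgame mode needs (this is not taken from the listing above, which does not implement endgame; the structure and names are an assumption): remember which peers each duplicated request went to, and compute the cancel set when the block arrives.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class EndgameTracker {
    // block id -> peers the duplicate requests were sent to
    private final Map<Integer, Set<Integer>> outstanding = new HashMap<>();

    // In endgame mode the same block is requested from every peer that has it.
    public void requestFromAll(int block, Set<Integer> peers) {
        outstanding.put(block, new HashSet<>(peers));
    }

    // When the block arrives from one peer, the duplicates become useless:
    // returns the peers that should now receive a CANCEL message.
    public Set<Integer> onBlockReceived(int block, int fromPeer) {
        Set<Integer> duplicates = outstanding.remove(block);
        if (duplicates == null) return new HashSet<>();
        duplicates.remove(fromPeer);
        return duplicates;
    }
}
```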
2. Choking Algorithms
BitTorrent does not allocate resources centrally. Each peer is responsible for maximizing its own download rate. Peers download from whomever they can, and reciprocate by uploading to peers in proportion to the download rate those peers provide (tit for tat). Cooperative peers get uploads; uncooperative ones get choked. Choking is thus a temporary refusal to upload: uploading stops, but downloading continues, and the connection does not need to be rebuilt when the choke ends.
The choking algorithm is not, strictly speaking, part of the BitTorrent wire protocol (the protocol peers use to talk to one another), but it is necessary for good performance. A good choking algorithm should use all available resources, provide consistently reliable download rates for all downloaders, and appropriately punish peers that only download and never upload.
1) Pareto Efficiency
In economics, a state (an allocation of resources, a social arrangement, and so on) is Pareto optimal if there is no alternative state in which at least one person is better off and nobody is worse off. Put simply, once no one can gain except at someone else's expense, Pareto optimality has been reached. In computing, seeking Pareto efficiency is a local optimization technique. BitTorrent's choking algorithm uses a tit-for-tat approach to try to reach Pareto efficiency (the original wording is awkward to translate and is simplified here). Peers reciprocate uploading to the peers that upload to them, with the goal of having several connections transferring in both directions at any given time.
2) BitTorrent's Choking Algorithm
Technically, each BitTorrent peer always keeps a fixed number of other peers unchoked (typically four), so the question becomes: which peers should be unchoked? This approach allows TCP's congestion control to reliably saturate the upload capacity (that is, it pushes the system's total upload capacity toward its maximum).
Decisions about which peers to unchoke are based strictly on current download rate. Surprisingly, measuring the current download rate is a hard problem. The current implementation is essentially a rolling 20-second average. An earlier algorithm totalled the transfer over the connection's whole lifetime, which worked poorly because bandwidth can change very quickly as resources become available or unavailable.
To avoid wasting resources by rapidly choking and unchoking peers, BitTorrent recalculates which peers to choke once every ten seconds and then keeps that decision until the next round. Ten seconds is long enough for TCP to ramp a transfer up to full capacity.
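The rolling 20-second measurement is exactly what the Element fields in the listing support: the rate over the last window is valueDOWN - head20 (valueUP - head20 for a seeder), which is how greater() compares peers. A sketch of one choke round under that assumption (helper names are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;

public class ChokeRound {
    // valueDOWN[i] = total blocks downloaded from peer i; head20[i] = that total
    // as it was 20 seconds ago. Returns the indices of the k peers with the best
    // rate over the last 20 seconds: the peers to keep unchoked until next round.
    public static int[] topUploaders(int[] valueDOWN, int[] head20, int k) {
        Integer[] order = new Integer[valueDOWN.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        // sort ascending by the negated rolling rate, i.e. best rate first;
        // the sort is stable, so ties keep their original index order
        Arrays.sort(order, Comparator.comparingInt((Integer i) -> head20[i] - valueDOWN[i]));
        int[] top = new int[k];
        for (int i = 0; i < k; i++) top[i] = order[i];
        return top;
    }
}
```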
3) Optimistic Unchoking
Simply uploading to the peers that provide the best download rates would give no way to discover whether some idle connection might perform better than the ones currently in use. To solve this, at all times each peer keeps one connection, its "optimistic unchoke", unchoked regardless of its download rate. Which connection is the optimistic unchoke is recalculated every 30 seconds, long enough for the upload to ramp up to full capacity and for the reciprocated download to ramp up in turn. This tit-for-tat-like idea works remarkably well: optimistic unchoking cooperates neatly with the prisoner's dilemma.
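A minimal sketch of the rotation (illustrative names; the explicit seed exists only so the example is reproducible): every 30 seconds, one currently-choked peer is picked at random and unchoked regardless of its rate.

```java
import java.util.List;
import java.util.Random;

public class OptimisticUnchoke {
    private final Random rnd;
    private int current = -1; // peer currently holding the optimistic unchoke

    public OptimisticUnchoke(long seed) {
        rnd = new Random(seed);
    }

    // Called once per 30-second round: move the optimistic unchoke to a
    // randomly chosen choked peer, ignoring its measured download rate.
    public int rotate(List<Integer> chokedPeers) {
        current = chokedPeers.get(rnd.nextInt(chokedPeers.size()));
        return current;
    }
}
```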
4) Anti-snubbing
Occasionally a peer will be choked by all of its peers, in which case it is stuck with a poor download rate until the optimistic unchokes find it better partners. To mitigate this, when some period passes without a single piece arriving from a particular peer, the local peer assumes it has been "snubbed" by that peer and stops uploading to it, except through an optimistic unchoke. When this situation occurs frequently, it results in more than one simultaneous optimistic unchoke.
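The check itself is a simple timeout; a sketch (the one-minute threshold used in the test is only an assumed value, chosen for illustration):

```java
public class AntiSnub {
    // A peer has "snubbed" us if it has sent no piece for longer than the
    // timeout; we then stop uploading to it except via an optimistic unchoke.
    public static boolean isSnubbedBy(long nowMs, long lastPieceReceivedMs, long timeoutMs) {
        return nowMs - lastPieceReceivedMs > timeoutMs;
    }
}
```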
5) Upload Only
Once a peer has finished downloading, it can no longer use download rates (which are now zero) to decide which peers to upload to. The current solution is to prefer the peers to which it can upload fastest, the rationale being that this makes the fullest possible use of upload bandwidth.

Original article: https://www.cnblogs.com/btlcmr/p/5408171.html